Found 8 cores, limiting parallelism with --test.parallel=4 === RUN TestDownloadOnly === RUN TestDownloadOnly/v1.16.0 === RUN TestDownloadOnly/v1.16.0/json-events aaa_download_only_test.go:73: (dbg) Run: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker --container-runtime=docker aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker --container-runtime=docker: (5.519928772s) === RUN TestDownloadOnly/v1.16.0/preload-exists === RUN TestDownloadOnly/v1.16.0/cached-images aaa_download_only_test.go:123: Preload exists, images won't be cached === RUN TestDownloadOnly/v1.16.0/binaries aaa_download_only_test.go:142: Preload exists, binaries are present within. === RUN TestDownloadOnly/v1.16.0/kubectl aaa_download_only_test.go:158: Test for darwin and windows === RUN TestDownloadOnly/v1.16.0/LogsDuration aaa_download_only_test.go:175: (dbg) Run: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550 aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550: exit status 85 (80.574565ms) -- stdout -- * * ==> Audit <== * |---------|------|---------|------|---------|------------|----------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|------|---------|------|---------|------------|----------| |---------|------|---------|------|---------|------------|----------| * * ==> Last Start <== * Log file created at: 2022/02/21 08:25:07 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 08:25:07.341812 6562 out.go:297] Setting OutFile to fd 1 ... I0221 08:25:07.341888 6562 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:25:07.341892 6562 out.go:310] Setting ErrFile to fd 2... I0221 08:25:07.341896 6562 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:25:07.341985 6562 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin W0221 08:25:07.342094 6562 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: no such file or directory I0221 08:25:07.342352 6562 out.go:304] Setting JSON to true I0221 08:25:07.343186 6562 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":462,"bootTime":1645431446,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:25:07.343264 6562 start.go:122] virtualization: kvm guest I0221 08:25:07.346130 6562 notify.go:193] Checking for updates... 
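The json-events subtest above drives the minikube binary as a subprocess with `-o=json`, which makes `start` emit one JSON event object per stdout line. A minimal, self-contained sketch of that pattern follows; it is not the actual helper in aaa_download_only_test.go, but the binary path and flags are copied from the log:

```go
// Sketch: run "minikube start -o=json --download-only" and decode the
// line-delimited JSON events it prints. Illustrative only; the real test
// wraps this in its own Run/Done helpers.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-20220221082507-6550",
		"--force", "--kubernetes-version=v1.16.0",
		"--container-runtime=docker", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]interface{} // one event object per line
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON output
		}
		fmt.Println(ev["type"], ev["data"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```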
W0221 08:25:07.346151 6562 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball: no such file or directory I0221 08:25:07.348036 6562 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:25:07.384319 6562 docker.go:132] docker version: linux-20.10.12 I0221 08:25:07.384427 6562 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:25:07.772826 6562 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:07.412517232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 08:25:07.772952 6562 docker.go:237] overlay module found I0221 08:25:07.774965 6562 start.go:281] selected driver: docker I0221 08:25:07.774978 6562 start.go:798] validating driver "docker" against I0221 08:25:07.775157 6562 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:25:07.861920 6562 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:07.801093202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 08:25:07.862063 6562 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 08:25:07.862590 6562 start_flags.go:369] Using suggested 8000MB memory alloc based on sys=32104MB, container=32104MB I0221 08:25:07.862697 6562 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 08:25:07.862717 6562 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true] I0221 08:25:07.862737 6562 cni.go:93] Creating CNI manager for "" I0221 08:25:07.862745 6562 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:25:07.862759 6562 start_flags.go:302] config: {Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:25:07.864969 6562 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:25:07.866438 6562 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker I0221 08:25:07.866557 6562 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:25:07.906523 6562 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:25:07.906554 6562 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 08:25:08.056520 6562 preload.go:119] Found remote preload: 
https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 I0221 08:25:08.056553 6562 cache.go:57] Caching tarball of preloaded images I0221 08:25:08.056810 6562 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker I0221 08:25:08.059171 6562 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ... I0221 08:25:08.251350 6562 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:0c23f68e9d9de4489f09a530426fd1e3 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 I0221 08:25:10.812546 6562 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ... I0221 08:25:10.812636 6562 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ... I0221 08:25:11.727380 6562 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker I0221 08:25:11.727669 6562 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/download-only-20220221082507-6550/config.json ... I0221 08:25:11.727699 6562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/download-only-20220221082507-6550/config.json: {Name:mkeee4e3cacb9472f15dbfb8f01d43ade0c1140b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:25:11.727870 6562 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker I0221 08:25:11.728047 6562 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.16.0/kubectl * * The control plane node "" does not exist. 
To start a cluster, run: "minikube start -p download-only-20220221082507-6550" -- /stdout -- aaa_download_only_test.go:176: minikube logs failed with error: exit status 85 === RUN TestDownloadOnly/v1.23.4 === RUN TestDownloadOnly/v1.23.4/json-events aaa_download_only_test.go:73: (dbg) Run: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.4 --container-runtime=docker --driver=docker --container-runtime=docker aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.4 --container-runtime=docker --driver=docker --container-runtime=docker: (9.460512296s) === RUN TestDownloadOnly/v1.23.4/preload-exists === RUN TestDownloadOnly/v1.23.4/cached-images aaa_download_only_test.go:123: Preload exists, images won't be cached === RUN TestDownloadOnly/v1.23.4/binaries aaa_download_only_test.go:142: Preload exists, binaries are present within. === RUN TestDownloadOnly/v1.23.4/kubectl aaa_download_only_test.go:158: Test for darwin and windows === RUN TestDownloadOnly/v1.23.4/LogsDuration aaa_download_only_test.go:175: (dbg) Run: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550 aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550: exit status 85 (70.609988ms) -- stdout -- * * ==> Audit <== * |---------|------|---------|------|---------|------------|----------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|------|---------|------|---------|------------|----------| |---------|------|---------|------|---------|------------|----------| * * ==> Last Start <== * Log file created at: 2022/02/21 08:25:12 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 08:25:12.949901 6710 out.go:297] Setting OutFile to fd 1 ... I0221 08:25:12.949982 6710 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:25:12.949986 6710 out.go:310] Setting ErrFile to fd 2... I0221 08:25:12.949991 6710 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:25:12.950094 6710 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin W0221 08:25:12.950200 6710 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: no such file or directory I0221 08:25:12.950307 6710 out.go:304] Setting JSON to true I0221 08:25:12.951089 6710 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":467,"bootTime":1645431446,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:25:12.951164 6710 start.go:122] virtualization: kvm guest I0221 08:25:12.953891 6710 notify.go:193] Checking for updates... 
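Both runs so far fetch the preload tarball through download.go:101 with the expected MD5 embedded in the URL query (`?checksum=md5:...`), then verify the file after writing it. A rough sketch of that download-and-verify step, hashing while writing; the URL and checksum are copied from the v1.16.0 run above, and the helper name and destination path are ours, not minikube's:

```go
// Sketch of a checksum-verified download: stream the preload tarball to disk
// and hash it in the same pass, then compare against the MD5 from the URL.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to disk and hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4"
	err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "0c23f68e9d9de4489f09a530426fd1e3")
	fmt.Println("verified:", err == nil, err)
}
```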
I0221 08:25:12.956357 6710 config.go:176] Loaded profile config "download-only-20220221082507-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0 W0221 08:25:12.956407 6710 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. I0221 08:25:12.956451 6710 driver.go:344] Setting default libvirt URI to qemu:///system W0221 08:25:12.956481 6710 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. I0221 08:25:12.993689 6710 docker.go:132] docker version: linux-20.10.12 I0221 08:25:12.993805 6710 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:25:13.083467 6710 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:13.022104163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 08:25:13.083575 6710 docker.go:237] overlay module found I0221 08:25:13.085798 6710 start.go:281] selected driver: docker I0221 08:25:13.085820 6710 start.go:798] validating driver "docker" against &{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:25:13.086071 6710 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:25:13.174487 6710 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:13.11529272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 
GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:25:13.175102 6710 cni.go:93] Creating CNI manager for "" I0221 08:25:13.175117 6710 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:25:13.175127 6710 start_flags.go:302] config: {Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:25:13.177240 6710 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:25:13.178787 6710 preload.go:132] Checking if preload exists 
for k8s version v1.23.4 and runtime docker I0221 08:25:13.178899 6710 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:25:13.220445 6710 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:25:13.220475 6710 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 08:25:13.366694 6710 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 08:25:13.366730 6710 cache.go:57] Caching tarball of preloaded images I0221 08:25:13.367078 6710 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:25:13.369423 6710 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 ... I0221 08:25:13.559042 6710 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4?checksum=md5:a60a5fe29a46acf7752603452100b8a6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 * * The control plane node "" does not exist. To start a cluster, run: "minikube start -p download-only-20220221082507-6550" -- /stdout -- aaa_download_only_test.go:176: minikube logs failed with error: exit status 85 === RUN TestDownloadOnly/v1.23.5-rc.0 === RUN TestDownloadOnly/v1.23.5-rc.0/json-events aaa_download_only_test.go:73: (dbg) Run: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.5-rc.0 --container-runtime=docker --driver=docker --container-runtime=docker aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.5-rc.0 --container-runtime=docker --driver=docker --container-runtime=docker: (17.365444552s) === RUN TestDownloadOnly/v1.23.5-rc.0/preload-exists aaa_download_only_test.go:113: No preload image === RUN TestDownloadOnly/v1.23.5-rc.0/cached-images aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0: no such file or directory aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" but got error: stat 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/pause_3.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/pause_3.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1: no such file or directory
aaa_download_only_test.go:135: expected image file exist at
"/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7: no such file or directory === RUN TestDownloadOnly/v1.23.5-rc.0/binaries === RUN TestDownloadOnly/v1.23.5-rc.0/kubectl aaa_download_only_test.go:158: Test for darwin and windows === RUN TestDownloadOnly/v1.23.5-rc.0/LogsDuration aaa_download_only_test.go:175: (dbg) Run: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550 aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550: exit status 85 (74.306384ms) -- stdout -- * * ==> Audit <== * |---------|------|---------|------|---------|------------|----------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|------|---------|------|---------|------------|----------| |---------|------|---------|------|---------|------------|----------| * * ==> Last Start <== * Log file created at: 2022/02/21 08:25:22 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 08:25:22.476308 6856 out.go:297] Setting OutFile to fd 1 ... I0221 08:25:22.476399 6856 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:25:22.476410 6856 out.go:310] Setting ErrFile to fd 2... I0221 08:25:22.476413 6856 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:25:22.476508 6856 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin W0221 08:25:22.476615 6856 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: no such file or directory I0221 08:25:22.476716 6856 out.go:304] Setting JSON to true I0221 08:25:22.477427 6856 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":477,"bootTime":1645431446,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:25:22.477490 6856 start.go:122] virtualization: kvm guest I0221 08:25:22.480003 6856 notify.go:193] Checking for updates... I0221 08:25:22.482145 6856 config.go:176] Loaded profile config "download-only-20220221082507-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 W0221 08:25:22.482197 6856 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. 
I0221 08:25:22.482234 6856 driver.go:344] Setting default libvirt URI to qemu:///system W0221 08:25:22.482256 6856 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. I0221 08:25:22.517228 6856 docker.go:132] docker version: linux-20.10.12 I0221 08:25:22.517337 6856 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:25:22.602412 6856 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:22.543231344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 08:25:22.602537 6856 docker.go:237] overlay module found I0221 08:25:22.604631 6856 start.go:281] selected driver: docker I0221 08:25:22.604643 6856 start.go:798] validating driver "docker" against &{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:25:22.604864 6856 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:25:22.688087 6856 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:22.630804808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 
GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:25:22.688626 6856 cni.go:93] Creating CNI manager for "" I0221 08:25:22.688642 6856 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:25:22.688650 6856 start_flags.go:302] config: {Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:25:22.690960 6856 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:25:22.692557 6856 preload.go:132] Checking if preload 
exists for k8s version v1.23.5-rc.0 and runtime docker I0221 08:25:22.692676 6856 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:25:22.736597 6856 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:25:22.736621 6856 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load W0221 08:25:22.837637 6856 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5-rc.0/preloaded-images-k8s-v17-v1.23.5-rc.0-docker-overlay2-amd64.tar.lz4 status code: 404 I0221 08:25:22.837772 6856 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/download-only-20220221082507-6550/config.json ... I0221 08:25:22.837899 6856 cache.go:107] acquiring lock: {Name:mkae39637d54454769ea96c0928557495a2624a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.837906 6856 cache.go:107] acquiring lock: {Name:mk048af2cde148e8a512f7653817cea4bb1a47e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.837921 6856 cache.go:107] acquiring lock: {Name:mk4db3a52d1f4fba9dc9223f3164cb8742f00f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838009 6856 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 08:25:22.838046 6856 cache.go:107] acquiring lock: {Name:mk8eae83c87e69d4f61d57feebab23b9c618f6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838047 6856 cache.go:107] acquiring lock: {Name:mkf4838fe0f0754a09f1960b33e83e9fd73716a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838082 6856 cache.go:107] acquiring lock: {Name:mk9f52e4209628388c7268565716f70b6a94e740 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838097 6856 cache.go:107] acquiring lock: {Name:mkc848fd9c1e80ffd1414dd8603c19c641b3fcb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838141 6856 cache.go:107] acquiring lock: {Name:mkd0cd2ae3afc8e39e716bbcd5f1e196bdbc0e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838151 6856 cache.go:107] acquiring lock: {Name:mk8cb7540d8a1bd7faccdcc974630f93843749a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838002 6856 cache.go:107] acquiring lock: {Name:mk0340c3f1bf4216c7deeea4078501a3da4b3533 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:25:22.838335 6856 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubeadm I0221 08:25:22.838332 6856 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet.sha256 -> 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubelet
I0221 08:25:22.838375 6856 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.5-rc.0
I0221 08:25:22.838413 6856 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.5-rc.0
I0221 08:25:22.838460 6856 image.go:134] retrieving image: k8s.gcr.io/pause:3.6
I0221 08:25:22.838478 6856 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
I0221 08:25:22.838485 6856 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0221 08:25:22.838381 6856 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.1-0
I0221 08:25:22.838602 6856 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0221 08:25:22.838706 6856 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubectl
I0221 08:25:22.838743 6856 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0
I0221 08:25:22.838805 6856 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.23.5-rc.0
I0221 08:25:22.838913 6856 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0221 08:25:22.839659 6856 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.5-rc.0: Error response from daemon: reference does not exist
I0221 08:25:22.839681 6856 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
I0221 08:25:22.839698 6856 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
I0221 08:25:22.839738 6856 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.5-rc.0: Error response from daemon: reference does not exist
I0221 08:25:22.839935 6856 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.5-rc.0: Error response from daemon: reference does not exist
I0221 08:25:22.840166 6856 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0: Error response from daemon: reference does not exist
I0221 08:25:22.853290 6856 image.go:176] found k8s.gcr.io/pause:3.6 locally: &{UncompressedImageCore:0xc000010348 lock:{state:0 sema:0} manifest:<nil>}
I0221 08:25:22.853328 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6
I0221 08:25:22.896570 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
I0221 08:25:22.896616 6856 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 58.707968ms
I0221 08:25:22.896631 6856 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
I0221 08:25:23.142313 6856 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc0000102f8 lock:{state:0 sema:0} manifest:<nil>}
I0221 08:25:23.142361 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0221 08:25:23.277736 6856 image.go:176] found k8s.gcr.io/coredns/coredns:v1.8.6 locally: &{UncompressedImageCore:0xc0007262a8 lock:{state:0 sema:0} manifest:<nil>}
I0221 08:25:23.277771 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I0221 08:25:24.447266 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0
I0221 08:25:24.512209 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0
I0221 08:25:24.519076 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0
I0221 08:25:24.669999 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0
I0221 08:25:24.813579 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0221 08:25:24.813629 6856 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.975479932s
I0221 08:25:24.813652 6856 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0221 08:25:25.075060 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7
I0221 08:25:25.176535 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1
I0221 08:25:25.474083 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I0221 08:25:25.474137 6856 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 2.636245671s
I0221 08:25:25.474154 6856 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I0221 08:25:25.538233 6856 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl.sha256
I0221 08:25:25.803448 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
I0221 08:25:25.803500 6856 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 2.965515438s
I0221 08:25:25.803512 6856 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
I0221 08:25:25.880437 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
I0221 08:25:25.880487 6856 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 3.042406668s
I0221 08:25:25.880505 6856 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
I0221 08:25:25.921393 6856 image.go:176] found k8s.gcr.io/etcd:3.5.1-0 locally: &{UncompressedImageCore:0xc0001140c0 lock:{state:0 sema:0} manifest:<nil>}
I0221 08:25:25.921442 6856 cache.go:161] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0
I0221 08:25:26.958552 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 exists
I0221 08:25:26.958604 6856 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" took 4.120568852s
I0221 08:25:26.958622 6856 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 succeeded
I0221 08:25:27.302477 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 exists
I0221 08:25:27.302518 6856 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" took 4.464519367s
I0221 08:25:27.302529 6856 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 succeeded
I0221 08:25:27.361188 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 exists
I0221 08:25:27.361247 6856 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" took 4.523216477s
I0221 08:25:27.361264 6856 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 succeeded
I0221 08:25:27.846070 6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 exists
I0221 08:25:27.846126 6856 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" took 5.008253061s
I0221 08:25:27.846144 6856 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 succeeded
*
* The control plane node "" does not exist.
  To start a cluster, run: "minikube start -p download-only-20220221082507-6550"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
=== CONT  TestDownloadOnly/v1.23.5-rc.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestDownloadOnly/v1.23.5-rc.0]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect download-only-20220221082507-6550
helpers_test.go:232: (dbg) Non-zero exit: docker inspect download-only-20220221082507-6550: exit status 1 (40.845631ms)
-- stdout --
[]
-- /stdout --
** stderr **
Error: No such object: download-only-20220221082507-6550
** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p download-only-20220221082507-6550 -n download-only-20220221082507-6550
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p download-only-20220221082507-6550 -n download-only-20220221082507-6550: exit status 7 (57.816622ms)
-- stdout --
Nonexistent
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "download-only-20220221082507-6550" host is not running, skipping log retrieval (state="Nonexistent")
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run: out/minikube-linux-amd64 delete --all
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run: out/minikube-linux-amd64 delete -p download-only-20220221082507-6550
=== CONT  TestDownloadOnly
helpers_test.go:176: Cleaning up "download-only-20220221082507-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p download-only-20220221082507-6550
--- FAIL: TestDownloadOnly (33.58s)
    --- PASS: TestDownloadOnly/v1.16.0 (5.60s)
        --- PASS: TestDownloadOnly/v1.16.0/json-events (5.52s)
        --- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
        --- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
        --- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)
        --- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)
        --- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
    --- PASS: TestDownloadOnly/v1.23.4 (9.53s)
        --- PASS: TestDownloadOnly/v1.23.4/json-events (9.46s)
        --- PASS: TestDownloadOnly/v1.23.4/preload-exists (0.00s)
        --- SKIP: TestDownloadOnly/v1.23.4/cached-images (0.00s)
        --- SKIP: TestDownloadOnly/v1.23.4/binaries (0.00s)
        --- SKIP: TestDownloadOnly/v1.23.4/kubectl (0.00s)
        --- PASS: TestDownloadOnly/v1.23.4/LogsDuration (0.07s)
    --- FAIL: TestDownloadOnly/v1.23.5-rc.0 (17.71s)
        --- PASS: TestDownloadOnly/v1.23.5-rc.0/json-events (17.37s)
        --- SKIP: TestDownloadOnly/v1.23.5-rc.0/preload-exists (0.17s)
        --- FAIL: TestDownloadOnly/v1.23.5-rc.0/cached-images (0.00s)
        --- PASS: TestDownloadOnly/v1.23.5-rc.0/binaries (0.00s)
        --- SKIP: TestDownloadOnly/v1.23.5-rc.0/kubectl (0.00s)
        --- PASS: TestDownloadOnly/v1.23.5-rc.0/LogsDuration (0.08s)
    --- PASS: TestDownloadOnly/DeleteAll (0.33s)
    --- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run: out/minikube-linux-amd64 start --download-only -p download-docker-20220221082540-6550 --force --alsologtostderr --driver=docker --container-runtime=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220221082540-6550 --force --alsologtostderr --driver=docker --container-runtime=docker: (26.139289452s)
helpers_test.go:176: Cleaning up "download-docker-20220221082540-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p download-docker-20220221082540-6550
--- PASS: TestDownloadOnlyKic (27.45s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run: out/minikube-linux-amd64 start --download-only -p binary-mirror-20220221082608-6550 --alsologtostderr --binary-mirror http://127.0.0.1:46005 --driver=docker --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-20220221082608-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p binary-mirror-20220221082608-6550
--- PASS: TestBinaryMirror (0.86s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== RUN   TestAddons
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run: out/minikube-linux-amd64 start -p addons-20220221082609-6550 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220221082609-6550 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.980124241s)
=== RUN   TestAddons/parallel
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
=== CONT  TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 16.748295ms
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 16.198388ms
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 16.102604ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-56gtt" [4b08259b-50fc-4dc8-bc8b-6149e221c3b0] Running
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-6b76bd68b6-4kxhz" [b0343546-e7d8-45c2-b499-0687d6368039] Running
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-4rdms" [ff60f50a-a3f2-4595-8a20-4bec721efbda] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 47.495762ms
addons_test.go:515: (dbg) Run: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:515: (dbg) Done: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.069500662s)
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run: kubectl --context addons-20220221082609-6550 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [2ef4973c-40c1-4215-902e-2748e4ff2d8d] Pending
helpers_test.go:343: "task-pv-pod" [2ef4973c-40c1-4215-902e-2748e4ff2d8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.027979176s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.030445691s
addons_test.go:366: (dbg) Run: kubectl --context addons-20220221082609-6550 top pods -n kube-system
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.031495757s
addons_test.go:424: (dbg) Run: kubectl --context addons-20220221082609-6550 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-pfpv4" [f3d3b5d3-2b5b-4921-8973-afd1444b4bc1] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable metrics-server --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run: kubectl --context addons-20220221082609-6550 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run: kubectl --context addons-20220221082609-6550 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run: kubectl --context addons-20220221082609-6550 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [19f0ad74-9545-41a0-91ac-ada04e4b7059] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008285011s
addons_test.go:291: (dbg) Run: kubectl --context addons-20220221082609-6550 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run: kubectl --context addons-20220221082609-6550 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [2ef4973c-40c1-4215-902e-2748e4ff2d8d] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220221082609-6550 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.205122066s)
addons_test.go:441: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable helm-tiller --alsologtostderr -v=1
=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.005763864s
addons_test.go:535: (dbg) Run: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run: kubectl --context addons-20220221082609-6550 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned:
helpers_test.go:418: (dbg) Run: kubectl --context addons-20220221082609-6550 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run: kubectl --context addons-20220221082609-6550 delete pod task-pv-pod
addons_test.go:551: (dbg) Run: kubectl --context addons-20220221082609-6550 delete pvc hpvc
addons_test.go:557: (dbg) Run: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run: kubectl --context addons-20220221082609-6550 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [2618d2cb-500e-40c8-abc7-8750e4a9f5d7] Pending
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [19f0ad74-9545-41a0-91ac-ada04e4b7059] Running
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [2618d2cb-500e-40c8-abc7-8750e4a9f5d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220221082609-6550 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.24052981s)
addons_test.go:310: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 ip
2022/02/21 08:28:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:339: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable registry --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.006461357s
addons_test.go:213: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run: kubectl --context addons-20220221082609-6550 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 ip
addons_test.go:248: (dbg) Run: nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress-dns --alsologtostderr -v=1: (1.406711293s)
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress --alsologtostderr -v=1
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [2618d2cb-500e-40c8-abc7-8750e4a9f5d7] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.005775275s
addons_test.go:577: (dbg) Run: kubectl --context addons-20220221082609-6550 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run: kubectl --context addons-20220221082609-6550 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run: kubectl --context addons-20220221082609-6550 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable csi-hostpath-driver --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress --alsologtostderr -v=1: (7.538188737s)
=== CONT  TestAddons/parallel/CSI
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.964678772s)
addons_test.go:593: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable volumesnapshots --alsologtostderr -v=1
=== RUN   TestAddons/serial
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run: kubectl --context addons-20220221082609-6550 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f4763a2d-b9a0-49c0-bc75-d380dc7e43c3] Pending
helpers_test.go:343: "busybox" [f4763a2d-b9a0-49c0-bc75-d380dc7e43c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f4763a2d-b9a0-49c0-bc75-d380dc7e43c3] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.007454769s
addons_test.go:616: (dbg) Run: kubectl --context addons-20220221082609-6550 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run: kubectl --context addons-20220221082609-6550 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable gcp-auth --alsologtostderr -v=1: (5.978927494s)
addons_test.go:682: (dbg) Run: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons enable gcp-auth: (2.941873728s)
addons_test.go:688: (dbg) Run: kubectl --context addons-20220221082609-6550 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-v5swk" [2bca746d-92d7-4f3e-9085-55213ac943c7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-v5swk" [2bca746d-92d7-4f3e-9085-55213ac943c7] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 16.005459979s
addons_test.go:701: (dbg) Run: kubectl --context addons-20220221082609-6550 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-fnqrl" [9fc4ad6a-90e3-4714-8f47-5bb03276ea3d] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-fnqrl" [9fc4ad6a-90e3-4714-8f47-5bb03276ea3d] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.005501653s
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run: out/minikube-linux-amd64 stop -p addons-20220221082609-6550
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220221082609-6550: (11.143853848s)
addons_test.go:137: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p addons-20220221082609-6550
addons_test.go:141: (dbg) Run: out/minikube-linux-amd64 addons disable dashboard -p addons-20220221082609-6550
=== CONT  TestAddons
helpers_test.go:176: Cleaning up "addons-20220221082609-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p addons-20220221082609-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p addons-20220221082609-6550: (2.949461749s)
--- PASS: TestAddons (243.50s)
    --- PASS: TestAddons/Setup (139.98s)
    --- PASS: TestAddons/parallel (0.00s)
        --- SKIP: TestAddons/parallel/Olm (0.00s)
        --- PASS: TestAddons/parallel/MetricsServer (5.67s)
        --- PASS: TestAddons/parallel/HelmTiller (16.58s)
        --- PASS: TestAddons/parallel/Registry (25.08s)
        --- PASS: TestAddons/parallel/Ingress (29.90s)
        --- PASS: TestAddons/parallel/CSI (42.88s)
    --- PASS: TestAddons/serial (46.35s)
        --- PASS: TestAddons/serial/GCPAuth (46.35s)
    --- PASS: TestAddons/StoppedEnableDisable (11.34s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestErrorSpam
=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run: out/minikube-linux-amd64 start -p nospam-20220221083012-6550 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220221083012-6550 --driver=docker --container-runtime=docker
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220221083012-6550 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220221083012-6550 --driver=docker --container-runtime=docker: (25.996812242s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 start --dry-run
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 start --dry-run
error_spam_test.go:180: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 start --dry-run
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 status
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 status
error_spam_test.go:180: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 status
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 pause
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 pause
error_spam_test.go:180: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 pause
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 unpause
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 unpause
error_spam_test.go:180: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 unpause
=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop: (10.697195648s)
error_spam_test.go:157: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop
error_spam_test.go:180: (dbg) Run: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop
=== CONT  TestErrorSpam
helpers_test.go:176: Cleaning up "nospam-20220221083012-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p nospam-20220221083012-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20220221083012-6550: (1.881598081s)
--- PASS: TestErrorSpam (43.93s)
    --- PASS: TestErrorSpam/setup (26.00s)
    --- PASS: TestErrorSpam/start (0.89s)
    --- PASS: TestErrorSpam/status (1.14s)
    --- PASS: TestErrorSpam/pause (1.47s)
    --- PASS: TestErrorSpam/unpause (1.59s)
    --- PASS: TestErrorSpam/stop (10.96s)
=== RUN   TestFunctional
=== RUN   TestFunctional/serial
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1722: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/test/nested/copy/6550/hosts
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2104: (dbg) Run: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=docker
functional_test.go:2104: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=docker: (42.728392204s)
=== RUN   TestFunctional/serial/AuditLog
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --alsologtostderr -v=8: (5.59696837s)
functional_test.go:659: soft start took 5.597535193s for "functional-20220221083056-6550" cluster.
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run: kubectl config current-context
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run: kubectl --context functional-20220221083056-6550 get po -A
=== RUN   TestFunctional/serial/CacheCmd
=== RUN   TestFunctional/serial/CacheCmd/cache
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1050: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:3.1
functional_test.go:1050: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:3.3
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:3.3: (4.091155504s)
functional_test.go:1050: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:latest
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:latest: (3.668109034s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1081: (dbg) Run: docker build -t minikube-local-cache-test:functional-20220221083056-6550 /tmp/functional-20220221083056-65501516765280
functional_test.go:1093: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add minikube-local-cache-test:functional-20220221083056-6550
functional_test.go:1093: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add minikube-local-cache-test:functional-20220221083056-6550: (2.358855828s)
functional_test.go:1098: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache delete minikube-local-cache-test:functional-20220221083056-6550
functional_test.go:1087: (dbg) Run: docker rmi minikube-local-cache-test:functional-20220221083056-6550
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1106: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1114: (dbg) Run: out/minikube-linux-amd64 cache list
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1128: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl images
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1151: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (355.914503ms)
-- stdout --
FATA[0000] no such image "k8s.gcr.io/pause:latest" present
-- /stdout --
** stderr **
ssh: Process exited with status 1
** /stderr **
functional_test.go:1162: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache reload
functional_test.go:1162: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache reload: (1.716988871s)
functional_test.go:1167: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1176: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1176: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 kubectl -- --context functional-20220221083056-6550 get pods
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run: out/kubectl --context functional-20220221083056-6550 get pods
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.101472357s)
functional_test.go:757: restart took 28.101584553s for "functional-20220221083056-6550" cluster.
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run: kubectl --context functional-20220221083056-6550 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1240: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 logs
functional_test.go:1240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 logs: (1.29824247s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1257: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 logs --file /tmp/functional-20220221083056-65504180110779/logs.txt
functional_test.go:1257: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 logs --file /tmp/functional-20220221083056-65504180110779/logs.txt: (1.284081106s)
=== RUN   TestFunctional/parallel
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== RUN   TestFunctional/parallel/ProfileCmd
=== PAUSE TestFunctional/parallel/ProfileCmd
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== RUN   TestFunctional/parallel/TunnelCmd
=== PAUSE TestFunctional/parallel/TunnelCmd
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== RUN   TestFunctional/parallel/UpdateContextCmd
=== PAUSE TestFunctional/parallel/UpdateContextCmd
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== RUN   TestFunctional/parallel/ImageCommands
=== PAUSE TestFunctional/parallel/ImageCommands
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== RUN   TestFunctional/parallel/Version
=== PAUSE TestFunctional/parallel/Version
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 config unset cpus
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/ProfileCmd
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Run: out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [7d6bc60d-337e-47c1-9813-9053e6331422] Running
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run: out/minikube-linux-amd64 profile list --output json
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus: exit status 14 (67.20848ms)
** stderr **
Error: specified key could not be found in config
** /stderr **
functional_test.go:1203: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 config set cpus 2
functional_test.go:1203: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus
functional_test.go:1203: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 config unset cpus
functional_test.go:1203: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -n functional-20220221083056-6550 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus: exit status 14 (70.936264ms)
** stderr **
Error: specified key could not be found in config
** /stderr **
=== CONT  TestFunctional/parallel/Version
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo systemctl is-active crio"
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Run: out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 cp functional-20220221083056-6550:/home/docker/cp-test.txt /tmp/mk_test3108556767/cp-test.txt
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo systemctl is-active crio": exit status 1 (380.879029ms)
-- stdout --
inactive
-- /stdout --
** stderr **
ssh: Process exited with status 3
** /stderr **
=== CONT  TestFunctional/parallel/ImageCommands
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run: docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "421.581534ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1334: (dbg) Run: out/minikube-linux-amd64 profile list -l
functional_test.go:1339: Took "76.321037ms" to run "out/minikube-linux-amd64 profile list -l"
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Run: out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -n functional-20220221083056-6550 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: Took "404.770268ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1384: (dbg) Run: out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1389: Took "62.659602ms" to run "out/minikube-linux-amd64 profile list -o json --light"
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run: kubectl --context functional-20220221083056-6550 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
=== CONT  TestFunctional/parallel/DockerEnv
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220221083056-6550 docker-env) && out/minikube-linux-amd64 status -p functional-20220221083056-6550"
=== CONT  TestFunctional/parallel/UpdateContextCmd
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /etc/ssl/certs/6550.pem within VM
functional_test.go:1840: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/6550.pem"
functional_test.go:1839: Checking for existence of /usr/share/ca-certificates/6550.pem within VM
functional_test.go:1840: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /usr/share/ca-certificates/6550.pem"
functional_test.go:1839: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1840: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:518: (dbg) Run: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220221083056-6550 docker-env) && docker images"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /etc/ssl/certs/65502.pem within VM
functional_test.go:1867: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/65502.pem"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1796: Checking for existence of /etc/test/nested/copy/6550/hosts within VM
functional_test.go:1798: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/test/nested/copy/6550/hosts"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /usr/share/ca-certificates/65502.pem within VM
functional_test.go:1867: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /usr/share/ca-certificates/65502.pem"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1803: file sync test content: Test file for checking file sync process
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1660: (dbg) Run: kubectl --context functional-20220221083056-6550 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.591385032s)
functional_test.go:343: (dbg) Run: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1867: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1556: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 addons list
functional_test.go:1568: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 addons list -o json
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1591: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "echo hello"
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-59kjd" [1787aad8-0cdf-47b7-9708-ae1a4f17fb25] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1608: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "cat /etc/hostname"
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run: kubectl --context functional-20220221083056-6550 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1445: (dbg) Run: kubectl --context functional-20220221083056-6550 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013160786s
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-blt7n" [3c8202a0-8fc0-4c7f-98e5-c78ab64afc47] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) Run: kubectl --context functional-20220221083056-6550 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run: kubectl --context functional-20220221083056-6550 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run: kubectl --context functional-20220221083056-6550 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run: kubectl --context functional-20220221083056-6550 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [29b617b5-acf8-4e56-8e39-7968cf045069] Pending
helpers_test.go:343: "sp-pod" [29b617b5-acf8-4e56-8e39-7968cf045069] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (3.455943179s)
functional_test.go:444: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (2.804634501s)
functional_test.go:444: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run: docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.15632346s)
functional_test.go:236: (dbg) Run: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:241: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-59kjd" [1787aad8-0cdf-47b7-9708-ae1a4f17fb25] Running
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (5.886549361s)
functional_test.go:444: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image save gcr.io/google-containers/addon-resizer:functional-20220221083056-6550 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image save gcr.io/google-containers/addon-resizer:functional-20220221083056-6550 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.116742269s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image rm gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:444: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:444: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run: docker rmi gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:420: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-blt7n" [3c8202a0-8fc0-4c7f-98e5-c78ab64afc47] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.007237507s
functional_test.go:1674: (dbg) Run: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (144.319167ms)
** stderr **
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
command terminated with exit code 1
** /stderr **
functional_test.go:1674: (dbg) Run: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (286.921636ms)
** stderr **
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (2.649272259s)
functional_test.go:425: (dbg) Run: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/TunnelCmd
=== RUN   TestFunctional/parallel/TunnelCmd/serial
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220221083056-6550 tunnel --alsologtostderr]
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run: kubectl --context functional-20220221083056-6550 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [205d70b3-6416-4912-865f-b41c560c2497] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx]) === CONT TestFunctional/parallel/MySQL functional_test.go:1674: (dbg) Run: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;" functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (508.566811ms) ** stderr ** mysql: [Warning] Using a password on the command line interface can be insecure. ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) command terminated with exit code 1 ** /stderr ** === CONT TestFunctional/parallel/ServiceCmd functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 22.024660159s functional_test.go:1455: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 service list === CONT TestFunctional/parallel/MySQL functional_test.go:1674: (dbg) Run: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;" functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (227.296646ms) ** stderr ** mysql: [Warning] Using a password on the command line interface can be insecure. ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) command terminated with exit code 1 ** /stderr ** === CONT TestFunctional/parallel/ServiceCmd functional_test.go:1468: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 service --namespace=default --https --url hello-node functional_test.go:1484: found endpoint: https://192.168.49.2:30450 functional_test.go:1495: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 service hello-node --url --format={{.IP}} functional_test.go:1504: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 service hello-node --url functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:30450 functional_test.go:1521: Attempting to fetch http://192.168.49.2:30450 ... functional_test.go:1541: http://192.168.49.2:30450: success! 
body: Hostname: hello-node-54fbb85-blt7n Pod Information: -no pod information available- Server values: server_version=nginx: 1.13.3 - lua: 10008 Request Information: client_address=172.17.0.1 method=GET real path=/ query= request_version=1.1 request_uri=http://192.168.49.2:8080/ Request Headers: accept-encoding=gzip host=192.168.49.2:30450 user-agent=Go-http-client/1.1 Request Body: -no body in request- === CONT TestFunctional/parallel/InternationalLanguage functional_test.go:1021: (dbg) Run: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker functional_test.go:1021: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker: exit status 23 (217.050144ms) -- stdout -- * [functional-20220221083056-6550] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64) - MINIKUBE_LOCATION=13641 - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube - MINIKUBE_BIN=out/minikube-linux-amd64 * Utilisation du pilote docker basé sur le profil existant - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities -- /stdout -- ** stderr ** I0221 08:32:59.703556 43256 out.go:297] Setting OutFile to fd 1 ... I0221 08:32:59.703633 43256 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:32:59.703648 43256 out.go:310] Setting ErrFile to fd 2... 
I0221 08:32:59.703654 43256 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:32:59.703812 43256 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 08:32:59.704055 43256 out.go:304] Setting JSON to false I0221 08:32:59.705399 43256 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":934,"bootTime":1645431446,"procs":484,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:32:59.705474 43256 start.go:122] virtualization: kvm guest I0221 08:32:59.709172 43256 out.go:176] * [functional-20220221083056-6550] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64) I0221 08:32:59.710616 43256 out.go:176] - MINIKUBE_LOCATION=13641 I0221 08:32:59.712103 43256 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 08:32:59.713633 43256 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:32:59.715091 43256 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 08:32:59.716393 43256 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 08:32:59.716881 43256 config.go:176] Loaded profile config "functional-20220221083056-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:32:59.717278 43256 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:32:59.758927 43256 docker.go:132] docker version: linux-20.10.12 I0221 08:32:59.759019 43256 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:32:59.849011 43256 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:68 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-21 08:32:59.787874044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} 
LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:32:59.849193 43256 docker.go:237] overlay module found I0221 08:32:59.852683 43256 out.go:176] * Utilisation du pilote docker basé sur le profil existant I0221 08:32:59.852710 43256 start.go:281] selected driver: docker I0221 08:32:59.852717 43256 start.go:798] validating driver "docker" against &{Name:functional-20220221083056-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:functional-20220221083056-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: 
ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 08:32:59.852842 43256 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
W0221 08:32:59.852887 43256 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0221 08:32:59.852915 43256 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
! Votre groupe de contrôle ne permet pas de définir la mémoire.
I0221 08:32:59.855562 43256 out.go:176] - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0221 08:32:59.857565 43256 out.go:176]
W0221 08:32:59.857693 43256 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
I0221 08:32:59.859175 43256 out.go:176]
** /stderr **
=== CONT TestFunctional/parallel/MountCmd
=== RUN TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest4202658807:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1645432379861750446" to /tmp/mounttest4202658807/created-by-test
functional_test_mount_test.go:110: wrote "test-1645432379861750446" to /tmp/mounttest4202658807/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1645432379861750446" to /tmp/mounttest4202658807/test-1645432379861750446
functional_test_mount_test.go:118: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [29b617b5-acf8-4e56-8e39-7968cf045069] Running
=== CONT TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.607743ms)
** stderr **
ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:118: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -- ls -la /mount-9p
=== CONT TestFunctional/parallel/MySQL
functional_test.go:1674: (dbg) Run: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
=== CONT TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 21 08:32 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 21 08:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 21 08:32 test-1645432379861750446
functional_test_mount_test.go:140: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh cat /mount-9p/test-1645432379861750446
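The first findmnt probe above fails with exit status 1 because the 9p mount has not finished coming up inside the guest yet; the test simply retries, and the second probe plus the ls -la listing confirm the host files are visible. A sketch of the same wait-until-mounted poll, under the assumption that retrying findmnt over minikube ssh is an acceptable readiness check:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional-20220221083056-6550"
	// Poll until the 9p mount is visible inside the guest, as the test does
	// after its first non-zero findmnt exit.
	for i := 0; i < 20; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("/mount-9p never appeared")
}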
=== CONT TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 status
=== CONT TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run: kubectl --context functional-20220221083056-6550 replace --force -f testdata/busybox-mount-test.yaml
=== CONT TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [bcd85413-3f74-4a93-9045-92d8780ba4c0] Pending
=== CONT TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 status -o json
=== CONT TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Run: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
=== CONT TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [bcd85413-3f74-4a93-9045-92d8780ba4c0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker: exit status 23 (210.186354ms)
-- stdout --
* [functional-20220221083056-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13641
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on existing profile
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
-- /stdout --
** stderr **
I0221 08:33:03.104412 44475 out.go:297] Setting OutFile to fd 1 ...
I0221 08:33:03.104519 44475 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:33:03.104539 44475 out.go:310] Setting ErrFile to fd 2...
I0221 08:33:03.104548 44475 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:33:03.104689 44475 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 08:33:03.104964 44475 out.go:304] Setting JSON to false I0221 08:33:03.106501 44475 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":937,"bootTime":1645431446,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:33:03.106707 44475 start.go:122] virtualization: kvm guest I0221 08:33:03.109623 44475 out.go:176] * [functional-20220221083056-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 08:33:03.111155 44475 out.go:176] - MINIKUBE_LOCATION=13641 I0221 08:33:03.112596 44475 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 08:33:03.114079 44475 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:33:03.115505 44475 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 08:33:03.116811 44475 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 08:33:03.117265 44475 config.go:176] Loaded profile config "functional-20220221083056-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:33:03.117638 44475 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:33:03.156349 44475 docker.go:132] docker version: linux-20.10.12 I0221 08:33:03.156447 44475 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:33:03.244084 44475 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:68 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-21 08:33:03.185348421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} 
LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:33:03.244186 44475 docker.go:237] overlay module found I0221 08:33:03.247262 44475 out.go:176] * Using the docker driver based on existing profile I0221 08:33:03.247286 44475 start.go:281] selected driver: docker I0221 08:33:03.247291 44475 start.go:798] validating driver "docker" against &{Name:functional-20220221083056-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:functional-20220221083056-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] 
ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 08:33:03.247390 44475 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
W0221 08:33:03.247424 44475 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0221 08:33:03.247442 44475 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0221 08:33:03.249584 44475 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0221 08:33:03.251447 44475 out.go:176]
W0221 08:33:03.251527 44475 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I0221 08:33:03.252895 44475 out.go:176]
** /stderr **
functional_test.go:992: (dbg) Run: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [205d70b3-6416-4912-865f-b41c560c2497] Running
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220221083056-6550 --alsologtostderr -v=1]
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.006507897s
functional_test_pvc_test.go:101: (dbg) Run: kubectl --context functional-20220221083056-6550 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run: kubectl --context functional-20220221083056-6550 delete -f testdata/storage-provisioner/pod.yaml
=== CONT TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [bcd85413-3f74-4a93-9045-92d8780ba4c0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.0066182s
functional_test_mount_test.go:172: (dbg) Run: kubectl --context functional-20220221083056-6550 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo umount -f /mount-9p"
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220221083056-6550 delete -f testdata/storage-provisioner/pod.yaml: (1.800687718s)
functional_test_pvc_test.go:126: (dbg) Run: kubectl --context functional-20220221083056-6550 apply -f testdata/storage-provisioner/pod.yaml
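Both dry-run starts are expected to fail: --memory 250MB trips minikube's RSRC_INSUFFICIENT_REQ_MEMORY guard (250MiB is below the 1800MB usable minimum it enforces here), so exit status 23 is the pass condition for these subtests. The French copy of the same output further up comes from TestFunctional/parallel/InternationalLanguage, which reruns the identical dry-run under a French locale; "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the localized form of the English "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" line above.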
2022/02/21 08:33:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:911: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220221083056-6550 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 44917: os: process already finished
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [61156257-a4be-4dbe-9ab3-83435cdbf3ba] Pending
=== CONT TestFunctional/parallel/Version/short
functional_test.go:2126: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 version --short
=== CONT TestFunctional/parallel/Version/components
functional_test.go:2140: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 version -o=json --components
=== CONT TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest4202658807:/mount-9p --alsologtostderr -v=1] ...
=== RUN TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest1549300676:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.566894ms)
** stderr **
ssh: Process exited with status 1
** /stderr **
=== CONT TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [61156257-a4be-4dbe-9ab3-83435cdbf3ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT TestFunctional/parallel/Version/components
functional_test.go:2140: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 version -o=json --components: (1.167722395s)
=== CONT TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1986: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 update-context --alsologtostderr -v=2
=== CONT TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1986: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 update-context --alsologtostderr -v=2
=== CONT TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1986: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 update-context --alsologtostderr -v=2
=== CONT TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -- ls -la /mount-9p
=== CONT TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format short
=== CONT TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest1549300676:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220221083056-6550
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
=== CONT TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.00739565s
=== CONT TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format yaml
=== RUN TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run: kubectl --context functional-20220221083056-6550 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
=== CONT TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo umount -f /mount-9p"
=== RUN TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.111.40.165 is working!
=== RUN TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
=== RUN TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
=== RUN TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
=== RUN TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220221083056-6550 tunnel --alsologtostderr] ...
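minikube tunnel is what lets the nginx-svc LoadBalancer service above get an address: the tunnel process routes traffic on the host and an ingress IP (here 10.111.40.165) appears in the service status, which the IngressIP subtest reads back with the jsonpath query shown. A small poll along the same lines, assuming the tunnel is already running in the background:

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the LoadBalancer service until `minikube tunnel` has populated its
	// ingress IP, mirroring the kubectl jsonpath query in the log above.
	args := []string{"--context", "functional-20220221083056-6550", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}"}
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", args...).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			log.Printf("tunnel assigned ingress IP %s", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("no ingress IP assigned; is the tunnel running?")
}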
=== CONT TestFunctional/parallel/ImageCommands/ImageBuild functional_test.go:304: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh pgrep buildkitd === CONT TestFunctional/parallel/ImageCommands/ImageListYaml functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format yaml: - id: 25444908517a59c7cdc07534d3d71c3abe29c66305eb0254c668e881018b4c5f repoDigests: [] repoTags: - k8s.gcr.io/kube-controller-manager:v1.23.4 size: "125000000" - id: aceacb6244f9f92ae8f084a4fbcc78cc67c3d6cb7eda3c6b6773c8d099b05ade repoDigests: [] repoTags: - k8s.gcr.io/kube-scheduler:v1.23.4 size: "53500000" - id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9 repoDigests: [] repoTags: - docker.io/library/nginx:alpine size: "23400000" - id: 1c6a4b268d30a95cea8b7c96515ca66999dd279261276af3c78f6545cfa24573 repoDigests: [] repoTags: - docker.io/library/minikube-local-cache-test:functional-20220221083056-6550 size: "30" - id: 62930710c9634e1f7e53327a68b7b73fb81745817bbc1af3cfc17bba49e2029d repoDigests: [] repoTags: - k8s.gcr.io/kube-apiserver:v1.23.4 size: "135000000" - id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a repoDigests: [] repoTags: - docker.io/library/nginx:latest size: "142000000" - id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d repoDigests: [] repoTags: - k8s.gcr.io/etcd:3.5.1-0 size: "293000000" - id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91 repoDigests: [] repoTags: - gcr.io/google-containers/addon-resizer:functional-20220221083056-6550 size: "32900000" - id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da repoDigests: [] repoTags: - k8s.gcr.io/pause:3.3 size: "683000" - id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e repoDigests: [] repoTags: - k8s.gcr.io/pause:3.1 size: "742000" - id: 2114245ec4d6bfb19bc69c3d72cfc2702f285040ceaf3b3d16deb67e0c3f53de repoDigests: [] repoTags: - k8s.gcr.io/kube-proxy:v1.23.4 size: "112000000" - id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03 repoDigests: [] repoTags: - k8s.gcr.io/coredns/coredns:v1.8.6 size: "46800000" - id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570 repoDigests: [] repoTags: - docker.io/kubernetesui/dashboard:v2.3.1 size: "220000000" - id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9 repoDigests: [] repoTags: - docker.io/kubernetesui/metrics-scraper:v1.0.7 size: "34400000" - id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410 repoDigests: [] repoTags: - k8s.gcr.io/echoserver:1.8 size: "95400000" - id: 4181d485f6500849992cc568b26cfe13d98a7a2f995bc49a3e47b2fedf6468fe repoDigests: [] repoTags: - docker.io/library/mysql:5.7 size: "448000000" - id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee repoDigests: [] repoTags: - k8s.gcr.io/pause:3.6 size: "683000" - id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 repoDigests: [] repoTags: - gcr.io/k8s-minikube/storage-provisioner:v5 size: "31500000" - id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c repoDigests: [] repoTags: - gcr.io/k8s-minikube/busybox:1.28.4-glibc size: "4400000" - id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06 repoDigests: [] repoTags: - k8s.gcr.io/pause:latest size: "240000" === CONT TestFunctional/parallel/ImageCommands/ImageListJson functional_test.go:257: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls 
--format json === CONT TestFunctional/parallel/MountCmd/specific-port functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo umount -f /mount-9p": exit status 1 (385.067231ms) -- stdout -- umount: /mount-9p: not mounted. -- /stdout -- ** stderr ** ssh: Process exited with status 32 ** /stderr ** functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh \"sudo umount -f /mount-9p\"": exit status 1 functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest1549300676:/mount-9p --alsologtostderr -v=1 --port 46464] ... === CONT TestFunctional/parallel/ImageCommands/ImageListTable functional_test.go:257: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format table === CONT TestFunctional/parallel/ImageCommands/ImageListJson functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format json: [{"id":"2114245ec4d6bfb19bc69c3d72cfc2702f285040ceaf3b3d16deb67e0c3f53de","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.4"],"size":"112000000"},{"id":"aceacb6244f9f92ae8f084a4fbcc78cc67c3d6cb7eda3c6b6773c8d099b05ade","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.4"],"size":"53500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"1c6a4b268d30a95cea8b7c96515ca66999dd279261276af3c78f6545cfa24573","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220221083056-6550"],"size":"30"},{"id":"62930710c9634e1f7e53327a68b7b73fb81745817bbc1af3cfc17bba49e2029d","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.4"],"size":"135000000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"4181d485f6500849992cc568b26cfe13d98a7a2f995bc49a3e47b2fedf6468fe","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"25444908517a59c7cdc07534d3d71c3abe29c66305eb0254c668e881018b4c5f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.4"],"size":"125000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/c
oredns/coredns:v1.8.6"],"size":"46800000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220221083056-6550"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}] === CONT TestFunctional/parallel/ImageCommands/ImageBuild functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh pgrep buildkitd: exit status 1 (429.718149ms) ** stderr ** ssh: Process exited with status 1 ** /stderr ** functional_test.go:311: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image build -t localhost/my-image:functional-20220221083056-6550 testdata/build === CONT TestFunctional/parallel/ImageCommands/ImageListTable functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format table: |---------------------------------------------|--------------------------------|---------------|--------| | Image | Tag | Image ID | Size | |---------------------------------------------|--------------------------------|---------------|--------| | docker.io/library/minikube-local-cache-test | functional-20220221083056-6550 | 1c6a4b268d30a | 30B | | k8s.gcr.io/kube-scheduler | v1.23.4 | aceacb6244f9f | 53.5MB | | k8s.gcr.io/etcd | 3.5.1-0 | 25f8c7f3da61c | 293MB | | k8s.gcr.io/coredns/coredns | v1.8.6 | a4ca41631cc7a | 46.8MB | | gcr.io/k8s-minikube/busybox | 1.28.4-glibc | 56cc512116c8f | 4.4MB | | docker.io/library/mysql | 5.7 | 4181d485f6500 | 448MB | | k8s.gcr.io/kube-apiserver | v1.23.4 | 62930710c9634 | 135MB | | k8s.gcr.io/kube-controller-manager | v1.23.4 | 25444908517a5 | 125MB | | k8s.gcr.io/pause | 3.6 | 6270bb605e12e | 683kB | | gcr.io/google-containers/addon-resizer | functional-20220221083056-6550 | ffd4cfbbe753e | 32.9MB | | k8s.gcr.io/pause | 3.1 | da86e6ba6ca19 | 742kB | | k8s.gcr.io/echoserver | 1.8 | 82e4c8a736a4f | 95.4MB | | k8s.gcr.io/kube-proxy | v1.23.4 | 2114245ec4d6b | 112MB | | docker.io/library/nginx | alpine | bef258acf10dc | 23.4MB | | docker.io/kubernetesui/metrics-scraper | v1.0.7 | 7801cfc6d5c07 | 34.4MB | | gcr.io/k8s-minikube/storage-provisioner | v5 | 6e38f40d628db | 31.5MB | | k8s.gcr.io/pause | 3.3 | 0184c1613d929 | 683kB | | docker.io/library/nginx | latest | c316d5a335a5c | 142MB | | docker.io/kubernetesui/dashboard | v2.3.1 | e1482a24335a6 | 220MB | | k8s.gcr.io/pause | latest | 350b164e7ae1d | 240kB | |---------------------------------------------|--------------------------------|---------------|--------| === CONT TestFunctional/parallel/PersistentVolumeClaim helpers_test.go:343: "sp-pod" [61156257-a4be-4dbe-9ab3-83435cdbf3ba] Running === CONT TestFunctional/parallel/ImageCommands/ImageBuild functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image build -t localhost/my-image:functional-20220221083056-6550 testdata/build: (2.361877572s) functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image build -t localhost/my-image:functional-20220221083056-6550 
testdata/build:
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c94e52448441
Removing intermediate container c94e52448441
---> 9023514c8e8e
Step 3/3 : ADD content.txt /
---> ea5884b8ad43
Successfully built ea5884b8ad43
Successfully tagged localhost/my-image:functional-20220221083056-6550
functional_test.go:444: (dbg) Run: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.006498688s
functional_test_pvc_test.go:115: (dbg) Run: kubectl --context functional-20220221083056-6550 exec sp-pod -- ls /tmp/mount
=== RUN TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== RUN TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run: docker rmi -f localhost/my-image:functional-20220221083056-6550
=== RUN TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run: docker rmi -f minikube-local-cache-test:functional-20220221083056-6550
=== CONT TestFunctional
helpers_test.go:176: Cleaning up "functional-20220221083056-6550" profile ...
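The failed pgrep buildkitd probe just before the ImageBuild run tells the test no BuildKit daemon is available inside the node, which is why the build output above is in the classic docker builder format. The three build steps imply a Dockerfile of roughly this shape (reconstructed from the Step 1/3 .. 3/3 lines; the authoritative copy lives in the minikube repo's testdata/build directory):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /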
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p functional-20220221083056-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p functional-20220221083056-6550: (2.783811623s)
--- PASS: TestFunctional (142.62s)
    --- PASS: TestFunctional/serial (93.95s)
        --- PASS: TestFunctional/serial/CopySyncFile (0.00s)
        --- PASS: TestFunctional/serial/StartWithProxy (42.73s)
        --- PASS: TestFunctional/serial/AuditLog (0.00s)
        --- PASS: TestFunctional/serial/SoftStart (5.60s)
        --- PASS: TestFunctional/serial/KubeContext (0.03s)
        --- PASS: TestFunctional/serial/KubectlGetPods (0.17s)
        --- PASS: TestFunctional/serial/CacheCmd (14.45s)
            --- PASS: TestFunctional/serial/CacheCmd/cache (14.45s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.28s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.65s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.47s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)
                --- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)
        --- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)
        --- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
        --- PASS: TestFunctional/serial/ExtraConfig (28.10s)
        --- PASS: TestFunctional/serial/ComponentHealth (0.06s)
        --- PASS: TestFunctional/serial/LogsCmd (1.30s)
        --- PASS: TestFunctional/serial/LogsFileCmd (1.28s)
    --- PASS: TestFunctional/parallel (0.00s)
        --- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
        --- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
        --- PASS: TestFunctional/parallel/ProfileCmd (1.51s)
            --- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)
            --- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)
            --- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
        --- PASS: TestFunctional/parallel/CpCmd (1.51s)
        --- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
        --- PASS: TestFunctional/parallel/NodeLabels (0.06s)
        --- PASS: TestFunctional/parallel/DockerEnv (1.30s)
            --- PASS: TestFunctional/parallel/DockerEnv/bash (1.30s)
        --- PASS: TestFunctional/parallel/FileSync (0.42s)
        --- PASS: TestFunctional/parallel/CertSync (2.34s)
        --- PASS: TestFunctional/parallel/AddonsCmd (0.26s)
        --- PASS: TestFunctional/parallel/SSHCmd (0.74s)
        --- PASS: TestFunctional/parallel/ServiceCmd (24.18s)
        --- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
        --- PASS: TestFunctional/parallel/MySQL (27.81s)
        --- PASS: TestFunctional/parallel/StatusCmd (1.44s)
        --- PASS: TestFunctional/parallel/DryRun (0.51s)
        --- PASS: TestFunctional/parallel/DashboardCmd (3.69s)
        --- PASS: TestFunctional/parallel/Version (0.00s)
            --- PASS: TestFunctional/parallel/Version/short (0.06s)
            --- PASS: TestFunctional/parallel/Version/components (1.17s)
        --- PASS: TestFunctional/parallel/UpdateContextCmd (0.00s)
            --- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
            --- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
            --- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
        --- PASS: TestFunctional/parallel/TunnelCmd (14.39s)
            --- PASS: TestFunctional/parallel/TunnelCmd/serial (14.39s)
                --- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
                --- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService (14.27s)
                    --- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.22s)
                    --- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
                --- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
                --- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
                --- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
                --- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
                --- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
        --- PASS: TestFunctional/parallel/MountCmd (10.01s)
            --- PASS: TestFunctional/parallel/MountCmd/any-port (7.57s)
            --- PASS: TestFunctional/parallel/MountCmd/specific-port (2.44s)
        --- PASS: TestFunctional/parallel/ImageCommands (23.74s)
            --- PASS: TestFunctional/parallel/ImageCommands/Setup (2.64s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.71s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.49s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.72s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
            --- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)
        --- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.73s)
    --- PASS: TestFunctional/delete_addon-resizer_images (0.10s)
    --- PASS: TestFunctional/delete_my-image_image (0.03s)
    --- PASS: TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
=== RUN TestIngressAddonLegacy
=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220221083319-6550 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E0221 08:33:29.173867 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.179430 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.189715 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.209998 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.250349 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.331096 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.491485 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.812129 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:30.453076 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:31.733298 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:34.295065 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:39.415918 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:49.656769 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:34:10.137717 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220221083319-6550 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker: (56.351456358s)
=== RUN TestIngressAddonLegacy/serial
=== RUN TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons enable ingress --alsologtostderr -v=5: (17.154201568s)
=== RUN TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons enable ingress-dns --alsologtostderr -v=5
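The repeated E0221 cert_rotation.go:168 lines during this start appear to be client-go noise rather than failures: they point at the client certificate of the addons-20220221082609-6550 profile, which was created and torn down earlier in the run, so the certificate-rotation watcher keeps trying to re-read a file that no longer exists. The legacy cluster start completes normally (56.351456358s) in spite of them.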
=== RUN TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run: kubectl --context ingress-addon-legacy-20220221083319-6550 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220221083319-6550 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.546885035s)
addons_test.go:183: (dbg) Run: kubectl --context ingress-addon-legacy-20220221083319-6550 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run: kubectl --context ingress-addon-legacy-20220221083319-6550 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [7a4afd79-d1a4-480a-8520-2efa86fc7de1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0221 08:34:51.098131 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
helpers_test.go:343: "nginx" [7a4afd79-d1a4-480a-8520-2efa86fc7de1] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.005882201s
addons_test.go:213: (dbg) Run: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run: kubectl --context ingress-addon-legacy-20220221083319-6550 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 ip
addons_test.go:248: (dbg) Run: nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress-dns --alsologtostderr -v=1: (1.883277796s)
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress --alsologtostderr -v=1: (7.361315494s)
=== CONT TestIngressAddonLegacy
helpers_test.go:176: Cleaning up "ingress-addon-legacy-20220221083319-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p ingress-addon-legacy-20220221083319-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p ingress-addon-legacy-20220221083319-6550: (2.77177148s)
--- PASS: TestIngressAddonLegacy (114.78s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (56.35s)
--- PASS: TestIngressAddonLegacy/serial (55.66s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.15s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.40s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.10s)
=== RUN TestJSONOutput
=== RUN TestJSONOutput/start
=== RUN TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run: out/minikube-linux-amd64 start -p json-output-20220221083514-6550 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220221083514-6550 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=docker: (44.03886183s)
=== RUN TestJSONOutput/start/Audit
=== RUN TestJSONOutput/start/parallel
=== RUN TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== RUN TestJSONOutput/pause
=== RUN TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run: out/minikube-linux-amd64 pause -p json-output-20220221083514-6550 --output=json --user=testUser
=== RUN TestJSONOutput/pause/Audit
=== RUN TestJSONOutput/pause/parallel
=== RUN TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== RUN TestJSONOutput/unpause
=== RUN TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run: out/minikube-linux-amd64 unpause -p json-output-20220221083514-6550 --output=json --user=testUser
=== RUN TestJSONOutput/unpause/Audit
=== RUN TestJSONOutput/unpause/parallel
=== RUN TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== RUN TestJSONOutput/stop
=== RUN TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p json-output-20220221083514-6550 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220221083514-6550 --output=json --user=testUser: (10.909934352s)
=== RUN TestJSONOutput/stop/Audit
=== RUN TestJSONOutput/stop/parallel
=== RUN TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput
helpers_test.go:176: Cleaning up "json-output-20220221083514-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p json-output-20220221083514-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p json-output-20220221083514-6550: (1.890572852s)
--- PASS: TestJSONOutput (58.06s)
--- PASS: TestJSONOutput/start (44.04s)
--- PASS: TestJSONOutput/start/Command (44.04s)
--- PASS: TestJSONOutput/start/Audit (0.00s)
--- PASS: TestJSONOutput/start/parallel (0.00s)
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
--- PASS: TestJSONOutput/pause (0.66s)
--- PASS: TestJSONOutput/pause/Command (0.66s)
--- PASS: TestJSONOutput/pause/Audit (0.00s)
--- PASS: TestJSONOutput/pause/parallel (0.00s)
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
--- PASS: TestJSONOutput/unpause (0.57s)
--- PASS: TestJSONOutput/unpause/Command (0.56s)
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
--- PASS: TestJSONOutput/unpause/parallel (0.00s)
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
--- PASS: TestJSONOutput/stop (10.91s)
--- PASS: TestJSONOutput/stop/Command (10.91s)
--- PASS: TestJSONOutput/stop/Audit (0.00s)
--- PASS: TestJSONOutput/stop/parallel (0.00s)
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN TestErrorJSONOutput
json_output_test.go:149: (dbg) Run: out/minikube-linux-amd64 start -p json-output-error-20220221083612-6550 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220221083612-6550 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.735478ms)
-- stdout --
{"specversion":"1.0","id":"2ff00846-ac27-4f7c-ae9b-e59c1c1c9e2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220221083612-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
{"specversion":"1.0","id":"8364300f-7b86-4b3f-bc3b-6b5b8d6a7f61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13641"}}
{"specversion":"1.0","id":"db162834-5758-4600-82fc-09fd88707978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
{"specversion":"1.0","id":"c36fe10b-aac4-4c30-96b3-049a72498e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig"}}
{"specversion":"1.0","id":"e71e9bcc-bda6-4bbb-ab53-9e53f74711b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube"}}
{"specversion":"1.0","id":"51d14628-55eb-4e87-9312-5353cc05c477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
{"specversion":"1.0","id":"9b819a67-ebd7-436c-9f3d-1912fe3a1c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220221083612-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p json-output-error-20220221083612-6550
--- PASS: TestErrorJSONOutput (0.29s)
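Each stdout line from a --output=json run is one CloudEvents-style JSON object, as the TestErrorJSONOutput capture above shows. A minimal sketch of consuming such a stream; the struct models only the fields visible in the logged events, which is an assumption about the full schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the captured output; the real schema
// may carry more, and unknown keys are simply ignored by the decoder.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into this program.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error:", e.Data["message"], "exitcode:", e.Data["exitcode"])
		}
	}
}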
{"specversion":"1.0","id":"e71e9bcc-bda6-4bbb-ab53-9e53f74711b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube"}} {"specversion":"1.0","id":"51d14628-55eb-4e87-9312-5353cc05c477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}} {"specversion":"1.0","id":"9b819a67-ebd7-436c-9f3d-1912fe3a1c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}} -- /stdout -- helpers_test.go:176: Cleaning up "json-output-error-20220221083612-6550" profile ... helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p json-output-error-20220221083612-6550 --- PASS: TestErrorJSONOutput (0.29s) === RUN TestKicCustomNetwork === RUN TestKicCustomNetwork/create_custom_network kic_custom_network_test.go:58: (dbg) Run: out/minikube-linux-amd64 start -p docker-network-20220221083612-6550 --network= E0221 08:36:13.019323 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220221083612-6550 --network=: (26.720094217s) kic_custom_network_test.go:102: (dbg) Run: docker network ls --format {{.Name}} helpers_test.go:176: Cleaning up "docker-network-20220221083612-6550" profile ... helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p docker-network-20220221083612-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220221083612-6550: (2.340929885s) === RUN TestKicCustomNetwork/use_default_bridge_network kic_custom_network_test.go:58: (dbg) Run: out/minikube-linux-amd64 start -p docker-network-20220221083641-6550 --network=bridge kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220221083641-6550 --network=bridge: (26.920226614s) kic_custom_network_test.go:102: (dbg) Run: docker network ls --format {{.Name}} helpers_test.go:176: Cleaning up "docker-network-20220221083641-6550" profile ... 
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p docker-network-20220221083641-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220221083641-6550: (2.125343485s)
--- PASS: TestKicCustomNetwork (58.17s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.10s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.08s)
=== RUN TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run: docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run: out/minikube-linux-amd64 start -p existing-network-20220221083710-6550 --network=existing-network
E0221 08:37:30.568392 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
[the same cert_rotation error repeated 10 more times between 08:37:30 and 08:37:35; identical apart from timestamps]
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220221083710-6550 --network=existing-network: (27.340692947s)
helpers_test.go:176: Cleaning up "existing-network-20220221083710-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p existing-network-20220221083710-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220221083710-6550: (2.328754275s)
--- PASS: TestKicExistingNetwork (29.89s)
=== RUN TestMainNoArgs
main_test.go:68: (dbg) Run: out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)
=== RUN TestMountStart
=== RUN TestMountStart/serial
=== RUN TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p mount-start-1-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker
E0221 08:37:40.808660 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker: (4.767347382s)
=== RUN TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run: out/minikube-linux-amd64 -p mount-start-1-20220221083740-6550 ssh -- ls /minikube-host
=== RUN TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker
E0221 08:37:51.049843 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker: (4.81840053s)
=== RUN TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run: out/minikube-linux-amd64 -p mount-start-2-20220221083740-6550 ssh -- ls /minikube-host
=== RUN TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete -p mount-start-1-20220221083740-6550 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220221083740-6550 --alsologtostderr -v=5: (1.750220679s)
=== RUN TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run: out/minikube-linux-amd64 -p mount-start-2-20220221083740-6550 ssh -- ls /minikube-host
=== RUN TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run: out/minikube-linux-amd64 stop -p mount-start-2-20220221083740-6550
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220221083740-6550: (1.271263344s)
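The VerifyMount* steps above reduce to one check: `ssh -- ls /minikube-host` must exit zero while the 9p mount is live. A sketch of the same probe, assuming the second mount profile from this log is still running:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `minikube ssh -- ls <dir>` exits non-zero if the mount is absent,
	// which is the entire assertion the Verify steps rely on.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "mount-start-2-20220221083740-6550",
		"ssh", "--", "ls", "/minikube-host")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("mount check failed:", err)
	}
}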
=== RUN TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run: out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550: (5.971122248s)
=== RUN TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run: out/minikube-linux-amd64 -p mount-start-2-20220221083740-6550 ssh -- ls /minikube-host
=== CONT TestMountStart
helpers_test.go:176: Cleaning up "mount-start-2-20220221083740-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p mount-start-2-20220221083740-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-2-20220221083740-6550: (1.66788262s)
helpers_test.go:176: Cleaning up "mount-start-1-20220221083740-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p mount-start-1-20220221083740-6550
--- PASS: TestMountStart (24.80s)
--- PASS: TestMountStart/serial (22.91s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.77s)
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.82s)
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)
--- PASS: TestMountStart/serial/DeleteFirst (1.75s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)
--- PASS: TestMountStart/serial/Stop (1.27s)
--- PASS: TestMountStart/serial/RestartStopped (6.97s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)
=== RUN TestMultiNode
=== RUN TestMultiNode/serial
=== RUN TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker --container-runtime=docker
E0221 08:38:11.530782 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:38:29.174379 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:38:52.491968 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:38:56.860358 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker --container-runtime=docker: (1m25.52919136s)
multinode_test.go:92: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
=== RUN TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- rollout status deployment/busybox
E0221 08:39:33.149332 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
[the same cert_rotation error repeated 8 more times between 08:39:33 and 08:39:34; identical apart from timestamps]
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- rollout status deployment/busybox: (3.291947725s)
multinode_test.go:497: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- nslookup kubernetes.io
E0221 08:39:35.708189 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:517: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- nslookup kubernetes.default.svc.cluster.local
=== RUN TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- sh -c "ping -c 1 192.168.49.1"
=== RUN TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20220221083805-6550 -v 3 --alsologtostderr
E0221 08:39:38.268990 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:43.389434 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:53.629578 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220221083805-6550 -v 3 --alsologtostderr: (27.497783892s)
multinode_test.go:117: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
=== RUN TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run: out/minikube-linux-amd64 profile list --output json
=== RUN TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --output json --alsologtostderr
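The status records in this run print the fields Name, Host, Kubelet, APIServer and Kubeconfig, and `status --output json` exposes the same data. A sketch of decoding it, assuming the JSON keys match those field names one-to-one (the exact schema is not shown in this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus lists the fields that appear in the status dumps in this log;
// the JSON key names are assumed to match the Go field names.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "multinode-20220221083805-6550", "status", "--output", "json")
	out, _ := cmd.Output() // a non-zero exit (e.g. 7) still writes the JSON body
	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		// A single-node profile may print one object rather than an array.
		var one nodeStatus
		if err2 := json.Unmarshal(out, &one); err2 != nil {
			fmt.Println("unexpected status output:", err)
			return
		}
		nodes = []nodeStatus{one}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s\n", n.Name, n.Host, n.Kubelet)
	}
}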
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp testdata/cp-test.txt multinode-20220221083805-6550:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550:/home/docker/cp-test.txt /tmp/mk_cp_test2552639775/cp-test_multinode-20220221083805-6550.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550:/home/docker/cp-test.txt multinode-20220221083805-6550-m02:/home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m02.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m02.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550:/home/docker/cp-test.txt multinode-20220221083805-6550-m03:/home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m03.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m03.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp testdata/cp-test.txt multinode-20220221083805-6550-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m02:/home/docker/cp-test.txt /tmp/mk_cp_test2552639775/cp-test_multinode-20220221083805-6550-m02.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m02:/home/docker/cp-test.txt multinode-20220221083805-6550:/home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m02:/home/docker/cp-test.txt multinode-20220221083805-6550-m03:/home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550-m03.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
E0221 08:40:14.109781 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550-m03.txt"
E0221 08:40:14.412616 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp testdata/cp-test.txt multinode-20220221083805-6550-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m03:/home/docker/cp-test.txt /tmp/mk_cp_test2552639775/cp-test_multinode-20220221083805-6550-m03.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m03:/home/docker/cp-test.txt multinode-20220221083805-6550:/home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550.txt"
helpers_test.go:555: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m03:/home/docker/cp-test.txt multinode-20220221083805-6550-m02:/home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550-m02.txt
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550-m02.txt"
=== RUN TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node stop m03: (1.276479943s)
multinode_test.go:221: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status: exit status 7 (621.77455ms)
-- stdout --
multinode-20220221083805-6550
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-20220221083805-6550-m02
type: Worker
host: Running
kubelet: Running

multinode-20220221083805-6550-m03
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
multinode_test.go:228: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr: exit status 7 (626.323283ms)
-- stdout --
multinode-20220221083805-6550
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-20220221083805-6550-m02
type: Worker
host: Running
kubelet: Running

multinode-20220221083805-6550-m03
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
** stderr **
I0221 08:40:20.278188 93035 out.go:297] Setting OutFile to fd 1 ...
I0221 08:40:20.278711 93035 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:40:20.278724 93035 out.go:310] Setting ErrFile to fd 2...
I0221 08:40:20.278731 93035 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:40:20.278986 93035 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
I0221 08:40:20.279265 93035 out.go:304] Setting JSON to false
I0221 08:40:20.279284 93035 mustload.go:65] Loading cluster: multinode-20220221083805-6550
I0221 08:40:20.279975 93035 config.go:176] Loaded profile config "multinode-20220221083805-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:40:20.279996 93035 status.go:253] checking status of multinode-20220221083805-6550 ...
I0221 08:40:20.280398 93035 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550 --format={{.State.Status}}
I0221 08:40:20.313486 93035 status.go:328] multinode-20220221083805-6550 host status = "Running" (err=<nil>)
I0221 08:40:20.313517 93035 host.go:66] Checking if "multinode-20220221083805-6550" exists ...
I0221 08:40:20.313768 93035 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220221083805-6550
I0221 08:40:20.346034 93035 host.go:66] Checking if "multinode-20220221083805-6550" exists ...
I0221 08:40:20.346311 93035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:40:20.346350 93035 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220221083805-6550 I0221 08:40:20.379881 93035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49212 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/multinode-20220221083805-6550/id_rsa Username:docker} I0221 08:40:20.468022 93035 ssh_runner.go:195] Run: systemctl --version I0221 08:40:20.471790 93035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:40:20.480879 93035 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:40:20.569876 93035 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:40:20.510577488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:40:20.570802 93035 kubeconfig.go:92] found "multinode-20220221083805-6550" server: "https://192.168.49.2:8443" I0221 08:40:20.570824 93035 api_server.go:165] Checking apiserver status ... 
I0221 08:40:20.570851 93035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 08:40:20.590679 93035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1717/cgroup
I0221 08:40:20.598113 93035 api_server.go:181] apiserver freezer: "9:freezer:/docker/a10990c235c4a56bc8a10787c5238205d1b01fe9300339ebfb3dfeebd8121c25/kubepods/burstable/pod8145f90dc270d9683ad72fcdce51fc35/a1b54a96554a324ea7654d9d90d70e9a6001f2fb6ba0160345df4a080bbdd228"
I0221 08:40:20.598188 93035 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a10990c235c4a56bc8a10787c5238205d1b01fe9300339ebfb3dfeebd8121c25/kubepods/burstable/pod8145f90dc270d9683ad72fcdce51fc35/a1b54a96554a324ea7654d9d90d70e9a6001f2fb6ba0160345df4a080bbdd228/freezer.state
I0221 08:40:20.604501 93035 api_server.go:203] freezer state: "THAWED"
I0221 08:40:20.604528 93035 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0221 08:40:20.609206 93035 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok
I0221 08:40:20.609234 93035 status.go:419] multinode-20220221083805-6550 apiserver status = Running (err=<nil>)
I0221 08:40:20.609243 93035 status.go:255] multinode-20220221083805-6550 status: &{Name:multinode-20220221083805-6550 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0221 08:40:20.609259 93035 status.go:253] checking status of multinode-20220221083805-6550-m02 ...
I0221 08:40:20.609533 93035 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550-m02 --format={{.State.Status}}
I0221 08:40:20.642478 93035 status.go:328] multinode-20220221083805-6550-m02 host status = "Running" (err=<nil>)
I0221 08:40:20.642508 93035 host.go:66] Checking if "multinode-20220221083805-6550-m02" exists ...
I0221 08:40:20.642737 93035 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220221083805-6550-m02
I0221 08:40:20.675407 93035 host.go:66] Checking if "multinode-20220221083805-6550-m02" exists ...
I0221 08:40:20.675657 93035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0221 08:40:20.675696 93035 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220221083805-6550-m02
I0221 08:40:20.709687 93035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/multinode-20220221083805-6550-m02/id_rsa Username:docker}
I0221 08:40:20.799483 93035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 08:40:20.808477 93035 status.go:255] multinode-20220221083805-6550-m02 status: &{Name:multinode-20220221083805-6550-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
I0221 08:40:20.808529 93035 status.go:253] checking status of multinode-20220221083805-6550-m03 ...
I0221 08:40:20.808793 93035 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550-m03 --format={{.State.Status}}
I0221 08:40:20.842951 93035 status.go:328] multinode-20220221083805-6550-m03 host status = "Stopped" (err=<nil>)
I0221 08:40:20.842974 93035 status.go:341] host is not running, skipping remaining checks
I0221 08:40:20.842979 93035 status.go:255] multinode-20220221083805-6550-m03 status: &{Name:multinode-20220221083805-6550-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run: docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node start m03 --alsologtostderr: (23.788507041s)
multinode_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status
multinode_test.go:280: (dbg) Run: kubectl get nodes
=== RUN TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-20220221083805-6550
multinode_test.go:295: (dbg) Run: out/minikube-linux-amd64 stop -p multinode-20220221083805-6550
E0221 08:40:55.071150 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220221083805-6550: (22.60284662s)
multinode_test.go:300: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr
E0221 08:42:16.991866 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr: (1m20.885119297s)
multinode_test.go:305: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-20220221083805-6550
=== RUN TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node delete m03
E0221 08:42:30.568483 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node delete m03: (4.644221911s)
multinode_test.go:405: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:419: (dbg) Run: docker volume ls
multinode_test.go:429: (dbg) Run: kubectl get nodes
multinode_test.go:437: (dbg) Run: kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
=== RUN TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 stop
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 stop: (21.438919865s)
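Throughout this run, `minikube status` exits 0 when everything is up and 7 once hosts are stopped, and the harness reads that code rather than the text. A sketch of the same exit-code branch; treating 7 as "stopped" is taken from the observed output here, not from a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "multinode-20220221083805-6550", "status")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Matches the "exit status 7" seen in this log when hosts are stopped.
		fmt.Println("profile exists but is stopped")
	default:
		fmt.Println("status failed:", err)
	}
}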
multinode_test.go:325: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status: exit status 7 (124.92685ms)
-- stdout --
multinode-20220221083805-6550
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20220221083805-6550-m02
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
multinode_test.go:332: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr: exit status 7 (122.29282ms)
-- stdout --
multinode-20220221083805-6550
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20220221083805-6550-m02
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
** stderr **
I0221 08:42:56.100608 106605 out.go:297] Setting OutFile to fd 1 ...
I0221 08:42:56.100688 106605 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:42:56.100693 106605 out.go:310] Setting ErrFile to fd 2...
I0221 08:42:56.100699 106605 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:42:56.100817 106605 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
I0221 08:42:56.101013 106605 out.go:304] Setting JSON to false
I0221 08:42:56.101032 106605 mustload.go:65] Loading cluster: multinode-20220221083805-6550
I0221 08:42:56.101412 106605 config.go:176] Loaded profile config "multinode-20220221083805-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:42:56.101434 106605 status.go:253] checking status of multinode-20220221083805-6550 ...
I0221 08:42:56.101848 106605 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550 --format={{.State.Status}}
I0221 08:42:56.133159 106605 status.go:328] multinode-20220221083805-6550 host status = "Stopped" (err=<nil>)
I0221 08:42:56.133183 106605 status.go:341] host is not running, skipping remaining checks
I0221 08:42:56.133189 106605 status.go:255] multinode-20220221083805-6550 status: &{Name:multinode-20220221083805-6550 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0221 08:42:56.133210 106605 status.go:253] checking status of multinode-20220221083805-6550-m02 ...
I0221 08:42:56.133471 106605 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550-m02 --format={{.State.Status}}
I0221 08:42:56.164963 106605 status.go:328] multinode-20220221083805-6550-m02 host status = "Stopped" (err=<nil>)
I0221 08:42:56.164983 106605 status.go:341] host is not running, skipping remaining checks
I0221 08:42:56.164989 106605 status.go:255] multinode-20220221083805-6550-m02 status: &{Name:multinode-20220221083805-6550-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
=== RUN TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run: docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr --driver=docker --container-runtime=docker
E0221 08:42:58.252870 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:43:29.174124 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr --driver=docker --container-runtime=docker: (59.246707808s)
multinode_test.go:365: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:379: (dbg) Run: kubectl get nodes
multinode_test.go:387: (dbg) Run: kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-20220221083805-6550
multinode_test.go:457: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m02 --driver=docker --container-runtime=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m02 --driver=docker --container-runtime=docker: exit status 14 (74.777662ms)
-- stdout --
* [multinode-20220221083805-6550-m02] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
  - MINIKUBE_LOCATION=13641
  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
  - MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
! Profile name 'multinode-20220221083805-6550-m02' is duplicated with machine name 'multinode-20220221083805-6550-m02' in profile 'multinode-20220221083805-6550'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:465: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m03 --driver=docker --container-runtime=docker
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m03 --driver=docker --container-runtime=docker: (27.006036804s)
multinode_test.go:472: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20220221083805-6550
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220221083805-6550: exit status 80 (351.580979ms)
-- stdout --
* Adding node m03 to cluster multinode-20220221083805-6550
-- /stdout --
** stderr **
X Exiting due to GUEST_NODE_ADD: Node multinode-20220221083805-6550-m03 already exists in multinode-20220221083805-6550-m03 profile
* ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
  │                                                                                             │
  │ * If the above advice does not help, please let us know:                                    │
  │   https://github.com/kubernetes/minikube/issues/new/choose                                  │
  │                                                                                             │
  │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.       │
  │ * Please also attach the following file to the GitHub issue:                                │
  │ * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                       │
  │                                                                                             │
  ╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:477: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20220221083805-6550-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220221083805-6550-m03: (2.347126652s)
=== CONT TestMultiNode
helpers_test.go:176: Cleaning up "multinode-20220221083805-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20220221083805-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220221083805-6550: (4.582119893s)
--- PASS: TestMultiNode (385.26s)
--- PASS: TestMultiNode/serial (380.67s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.12s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.42s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
--- PASS: TestMultiNode/serial/AddNode (28.28s)
--- PASS: TestMultiNode/serial/ProfileList (0.37s)
--- PASS: TestMultiNode/serial/CopyFile (12.00s)
--- PASS: TestMultiNode/serial/StopNode (2.53s)
--- PASS: TestMultiNode/serial/StartAfterStop (24.65s)
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.61s)
--- PASS: TestMultiNode/serial/DeleteNode (5.37s)
--- PASS: TestMultiNode/serial/StopMultiNode (21.69s)
--- PASS: TestMultiNode/serial/RestartMultiNode (59.97s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.84s)
=== RUN TestNetworkPlugins
=== PAUSE TestNetworkPlugins
=== RUN TestNoKubernetes
=== PAUSE TestNoKubernetes
=== RUN TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)
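The exit status 14 in ValidateNameConflict above comes from a uniqueness guard: a new profile name may not collide with any machine name inside an existing multi-node profile. A toy version of that check; the profile type here is invented for illustration and is not minikube's real config model:

package main

import "fmt"

// profile is a stand-in for the real config type; only what the
// uniqueness check needs is modelled here.
type profile struct {
	Name  string
	Nodes []string // machine names, e.g. "<profile>", "<profile>-m02"
}

func nameInUse(want string, profiles []profile) bool {
	for _, p := range profiles {
		if p.Name == want {
			return true
		}
		for _, n := range p.Nodes {
			if n == want {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := []profile{{
		Name:  "multinode-20220221083805-6550",
		Nodes: []string{"multinode-20220221083805-6550", "multinode-20220221083805-6550-m02"},
	}}
	if nameInUse("multinode-20220221083805-6550-m02", existing) {
		// Corresponds to the MK_USAGE / exit status 14 path in the log.
		fmt.Println("profile name should be unique")
	}
}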
--- SKIP: TestChangeNoneUser (0.00s)
=== RUN TestPause
=== PAUSE TestPause
=== RUN TestPreload
preload_test.go:49: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.17.0
E0221 08:44:33.148477 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:45:00.832781 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.17.0: (1m19.649106222s)
preload_test.go:62: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-20220221084430-6550 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220221084430-6550 -- docker pull gcr.io/k8s-minikube/busybox: (1.643286739s)
preload_test.go:72: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=docker --kubernetes-version=v1.17.3: (31.576285286s)
preload_test.go:81: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-20220221084430-6550 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220221084430-6550" profile ...
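For anyone replaying the preload check above by hand, a minimal sketch follows; the profile name "demo" is illustrative (not from this run) and a `minikube` binary on PATH is assumed, but the flags are the ones exercised above:

  # start without the preloaded images tarball, on the older Kubernetes version
  minikube start -p demo --preload=false --kubernetes-version=v1.17.0 --driver=docker --container-runtime=docker
  # seed an image that no preload would provide
  minikube ssh -p demo -- docker pull gcr.io/k8s-minikube/busybox
  # upgrade in place; the seeded image should survive the restart
  minikube start -p demo --kubernetes-version=v1.17.3 --driver=docker --container-runtime=docker
  minikube ssh -p demo -- docker images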
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-20220221084430-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220221084430-6550: (2.458961769s)
--- PASS: TestPreload (115.70s)
=== RUN TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run: out/minikube-linux-amd64 start -p scheduled-stop-20220221084626-6550 --memory=2048 --driver=docker --container-runtime=docker
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220221084626-6550 --memory=2048 --driver=docker --container-runtime=docker: (26.540480522s)
scheduled_stop_test.go:138: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run: out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550
scheduled_stop_test.go:170: signal error was: <nil>
scheduled_stop_test.go:138: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --schedule 15s
scheduled_stop_test.go:170: signal error was: os: process already finished
scheduled_stop_test.go:138: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550
scheduled_stop_test.go:206: (dbg) Run: out/minikube-linux-amd64 status -p scheduled-stop-20220221084626-6550
scheduled_stop_test.go:138: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --schedule 15s
E0221 08:47:30.569702 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
scheduled_stop_test.go:170: signal error was: os: process already finished
scheduled_stop_test.go:206: (dbg) Run: out/minikube-linux-amd64 status -p scheduled-stop-20220221084626-6550
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220221084626-6550: exit status 7 (90.984348ms)
-- stdout --
scheduled-stop-20220221084626-6550
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:177: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550: exit status 7 (94.406995ms)
-- stdout --
Stopped
-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220221084626-6550" profile ...
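The scheduled-stop workflow exercised above can be driven by hand with the same flags; a rough sketch (profile name "demo" is illustrative):

  minikube stop -p demo --schedule 5m                      # schedule a stop five minutes out
  minikube status -p demo --format={{.TimeToStop}}         # inspect the pending schedule
  minikube stop -p demo --cancel-scheduled                 # cancel it again
  minikube stop -p demo --schedule 15s                     # or let a short schedule fire
  minikube status -p demo --format={{.Host}}               # afterwards: Stopped (exit status 7)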
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p scheduled-stop-20220221084626-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220221084626-6550: (1.894418949s)
--- PASS: TestScheduledStopUnix (100.20s)
=== RUN TestSkaffold
skaffold_test.go:57: (dbg) Run: /tmp/skaffold.exe2704910771 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run: out/minikube-linux-amd64 start -p skaffold-20220221084806-6550 --memory=2600 --driver=docker --container-runtime=docker
E0221 08:48:29.174559 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220221084806-6550 --memory=2600 --driver=docker --container-runtime=docker: (26.383443302s)
skaffold_test.go:84: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:108: (dbg) Run: /tmp/skaffold.exe2704910771 run --minikube-profile skaffold-20220221084806-6550 --kube-context skaffold-20220221084806-6550 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /tmp/skaffold.exe2704910771 run --minikube-profile skaffold-20220221084806-6550 --kube-context skaffold-20220221084806-6550 --status-check=true --port-forward=false --interactive=false: (32.448919915s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-97df96546-g5gw9" [96e363e4-9129-462b-942b-cc4227f256e8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011007515s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-755869c6cd-8rvzx" [43164fde-9927-470b-a814-d0cb93fd37bf] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006362064s
helpers_test.go:176: Cleaning up "skaffold-20220221084806-6550" profile ...
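The skaffold-against-minikube flow above reduces to two commands; a rough hand-run equivalent (profile name "demo" is illustrative, and a skaffold binary on PATH is assumed in place of the test's temp copy):

  minikube start -p demo --memory=2600 --driver=docker --container-runtime=docker
  skaffold run --minikube-profile demo --kube-context demo --status-check=true --port-forward=false --interactive=false

Pointing --minikube-profile and --kube-context at the same profile is what lets skaffold build directly into the cluster's Docker daemon, which is why the test checks both leeroy pods go healthy without any registry push.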
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p skaffold-20220221084806-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220221084806-6550: (2.539499635s)
--- PASS: TestSkaffold (72.09s)
=== RUN TestStartStop
=== PAUSE TestStartStop
=== RUN TestInsufficientStorage
status_test.go:51: (dbg) Run: out/minikube-linux-amd64 start -p insufficient-storage-20220221084918-6550 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220221084918-6550 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=docker: exit status 26 (12.536910413s)
-- stdout --
{"specversion":"1.0","id":"5859708e-ebba-4716-a2d4-c2c4920a1e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220221084918-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
{"specversion":"1.0","id":"538e945c-032f-491f-a044-eccd4bcf8fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13641"}}
{"specversion":"1.0","id":"6c2b35dd-d32b-4b5a-a555-7a85fef61e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
{"specversion":"1.0","id":"f35b79ec-7571-44d9-af0b-fa2c632f24a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig"}}
{"specversion":"1.0","id":"8e169f6c-8d57-4d43-acb1-47021a826c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube"}}
{"specversion":"1.0","id":"2c61e3d2-5ea3-44fd-bafc-c6729ccb1bba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
{"specversion":"1.0","id":"6284dcba-c2fb-488b-9a4c-191b1d5440a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
{"specversion":"1.0","id":"4984f709-783d-48c6-8eaf-5abac16d31ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
{"specversion":"1.0","id":"3c90423e-8d70-41e7-9a6d-742d0b2b22fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
{"specversion":"1.0","id":"abd4b94d-7a96-4ca4-bc9d-3f7c89b17ab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
{"specversion":"1.0","id":"84b2c6d0-0cc3-4593-a200-3e4d240ec803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220221084918-6550 in cluster insufficient-storage-20220221084918-6550","name":"Starting Node","totalsteps":"19"}}
{"specversion":"1.0","id":"75897f30-7037-45cf-8202-b906844bd7eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
{"specversion":"1.0","id":"ea5e25e6-7ad2-4e18-a882-4f278b2ad1e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
{"specversion":"1.0","id":"3002905e-2ca4-4706-8588-e870af752d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:77: (dbg) Run: out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster: exit status 7 (354.126224ms)
-- stdout --
{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr **
E0221 08:49:31.427744 138609 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220221084918-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
** /stderr **
status_test.go:77: (dbg) Run: out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster: exit status 7 (351.028385ms)
-- stdout --
{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr **
E0221 08:49:31.779073 138709 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220221084918-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
E0221 08:49:31.791334 138709 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/insufficient-storage-20220221084918-6550/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220221084918-6550" profile ...
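Because --output=json emits one CloudEvents-style JSON object per line, as in the transcript above, the stream is easy to post-process. A rough sketch, assuming jq is installed (the filter itself is illustrative, the event shape is taken from the output above):

  # print only error events, e.g. RSRC_DOCKER_STORAGE when Docker's disk fills up
  minikube start -p demo --output=json --driver=docker \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

Exit status 26 is the exit code carried by the RSRC_DOCKER_STORAGE error event shown above.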
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p insufficient-storage-20220221084918-6550
E0221 08:49:33.148617 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220221084918-6550: (1.967541664s)
--- PASS: TestInsufficientStorage (15.21s)
=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== RUN TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade
=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== RUN TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT TestOffline
=== CONT TestMissingContainerUpgrade
=== CONT TestNetworkPlugins
=== CONT TestRunningBinaryUpgrade
=== RUN TestNetworkPlugins/group
=== RUN TestNetworkPlugins/group/auto
=== CONT TestOffline
aab_offline_test.go:56: (dbg) Run: out/minikube-linux-amd64 start -p offline-docker-20220221084933-6550 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=docker
=== PAUSE TestNetworkPlugins/group/auto
=== RUN TestNetworkPlugins/group/kubenet
=== PAUSE TestNetworkPlugins/group/kubenet
=== RUN TestNetworkPlugins/group/bridge
=== PAUSE TestNetworkPlugins/group/bridge
=== RUN TestNetworkPlugins/group/enable-default-cni
=== PAUSE TestNetworkPlugins/group/enable-default-cni
=== RUN TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220221084933-6550" profile ...
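The flannel group is skipped here because of the iptables incompatibility noted just above; the CNI variants that do run in this log are all selected through the --cni flag. A rough sketch, with illustrative profile names and values copied from the runs later in this log:

  minikube start -p demo --cni=calico --driver=docker --container-runtime=docker
  # a custom CNI can also be supplied as a manifest path:
  minikube start -p demo2 --cni=testdata/weavenet.yaml --driver=docker --container-runtime=docker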
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p flannel-20220221084933-6550
=== RUN TestNetworkPlugins/group/kindnet
=== PAUSE TestNetworkPlugins/group/kindnet
=== RUN TestNetworkPlugins/group/false
=== PAUSE TestNetworkPlugins/group/false
=== RUN TestNetworkPlugins/group/custom-weave
=== PAUSE TestNetworkPlugins/group/custom-weave
=== RUN TestNetworkPlugins/group/calico
=== PAUSE TestNetworkPlugins/group/calico
=== RUN TestNetworkPlugins/group/cilium
=== PAUSE TestNetworkPlugins/group/cilium
=== CONT TestKVMDriverInstallOrUpdate
=== CONT TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run: /tmp/minikube-v1.9.1.1745708248.exe start -p missing-upgrade-20220221084933-6550 --memory=2200 --driver=docker --container-runtime=docker
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run: /tmp/minikube-v1.9.0.1124988089.exe start -p running-upgrade-20220221084933-6550 --memory=2200 --vm-driver=docker --container-runtime=docker
--- PASS: TestKVMDriverInstallOrUpdate (8.40s)
=== CONT TestForceSystemdEnv
docker_test.go:151: (dbg) Run: out/minikube-linux-amd64 start -p force-systemd-env-20220221084942-6550 --memory=2048 --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E0221 08:49:52.220676 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220221084942-6550 --memory=2048 --alsologtostderr -v=5 --driver=docker --container-runtime=docker: (34.486715639s)
docker_test.go:105: (dbg) Run: out/minikube-linux-amd64 -p force-systemd-env-20220221084942-6550 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220221084942-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p force-systemd-env-20220221084942-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220221084942-6550: (7.011634254s)
--- PASS: TestForceSystemdEnv (42.01s)
=== CONT TestForceSystemdFlag
docker_test.go:86: (dbg) Run: out/minikube-linux-amd64 start -p force-systemd-flag-20220221085024-6550 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
=== CONT TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220221084933-6550 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=docker: (1m3.174570623s)
helpers_test.go:176: Cleaning up "offline-docker-20220221084933-6550" profile ...
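The two force-systemd tests above come down to starting with the flag and then asking Docker inside the node which cgroup driver it runs. A rough hand-run equivalent (profile name "demo" is illustrative):

  minikube start -p demo --force-systemd --driver=docker --container-runtime=docker
  # the in-node Docker should report "systemd" rather than "cgroupfs" if the flag took effect
  minikube -p demo ssh "docker info --format {{.CgroupDriver}}"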
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p offline-docker-20220221084933-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220221084933-6550: (2.814225484s)
--- PASS: TestOffline (65.99s)
=== CONT TestDockerFlags
docker_test.go:46: (dbg) Run: out/minikube-linux-amd64 start -p docker-flags-20220221085039-6550 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker
=== CONT TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220221085024-6550 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker: (38.132852763s)
docker_test.go:105: (dbg) Run: out/minikube-linux-amd64 -p force-systemd-flag-20220221085024-6550 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220221085024-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p force-systemd-flag-20220221085024-6550
=== CONT TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1745708248.exe start -p missing-upgrade-20220221084933-6550 --memory=2200 --driver=docker --container-runtime=docker: (1m29.754236748s)
version_upgrade_test.go:325: (dbg) Run: docker stop missing-upgrade-20220221084933-6550
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.1124988089.exe start -p running-upgrade-20220221084933-6550 --memory=2200 --vm-driver=docker --container-runtime=docker: (1m29.740882811s)
version_upgrade_test.go:137: (dbg) Run: out/minikube-linux-amd64 start -p running-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220221085024-6550: (2.703799425s)
--- PASS: TestForceSystemdFlag (41.42s)
=== CONT TestCertExpiration
cert_options_test.go:124: (dbg) Run: out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
=== CONT TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220221084933-6550: (10.517784599s)
version_upgrade_test.go:330: (dbg) Run: docker rm missing-upgrade-20220221084933-6550
version_upgrade_test.go:336: (dbg) Run: out/minikube-linux-amd64 start -p missing-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220221085039-6550 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker: (37.904788827s)
docker_test.go:51: (dbg) Run: out/minikube-linux-amd64 -p docker-flags-20220221085039-6550 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run: out/minikube-linux-amd64 -p docker-flags-20220221085039-6550 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220221085039-6550" profile ...
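TestDockerFlags above threads environment variables and daemon options through to the node's Docker, then reads them back out of its systemd unit. A rough hand-run equivalent (profile name "demo" is illustrative):

  minikube start -p demo --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=docker --container-runtime=docker
  # the env vars should appear in the unit's Environment, the opts in its ExecStart line
  minikube -p demo ssh "sudo systemctl show docker --property=Environment --no-pager"
  minikube -p demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"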
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p docker-flags-20220221085039-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220221085039-6550: (2.805789535s)
--- PASS: TestDockerFlags (41.63s)
=== CONT TestCertOptions
cert_options_test.go:50: (dbg) Run: out/minikube-linux-amd64 start -p cert-options-20220221085121-6550 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=docker
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (34.628526866s)
helpers_test.go:176: Cleaning up "running-upgrade-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p running-upgrade-20220221084933-6550
=== CONT TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker: (34.151583726s)
=== CONT TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220221084933-6550: (2.830409722s)
--- PASS: TestRunningBinaryUpgrade (127.80s)
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220221085121-6550 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=docker: (33.596057649s)
cert_options_test.go:61: (dbg) Run: out/minikube-linux-amd64 -p cert-options-20220221085121-6550 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run: kubectl --context cert-options-20220221085121-6550 config view
cert_options_test.go:101: (dbg) Run: out/minikube-linux-amd64 ssh -p cert-options-20220221085121-6550 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220221085121-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p cert-options-20220221085121-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220221085121-6550: (2.717251105s)
--- PASS: TestCertOptions (37.18s)
=== CONT TestPause
=== RUN TestPause/serial
=== RUN TestPause/serial/Start
pause_test.go:81: (dbg) Run: out/minikube-linux-amd64 start -p pause-20220221085158-6550 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=docker
=== CONT TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (46.545683898s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220221084933-6550" profile ...
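The cert-options run above can be verified by hand the same way the test does, by inspecting the apiserver certificate that minikube generates. A rough sketch (profile name "demo" is illustrative):

  minikube start -p demo --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 \
    --driver=docker --container-runtime=docker
  # the extra IPs and names should show up as Subject Alternative Names
  minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"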
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p missing-upgrade-20220221084933-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220221084933-6550: (6.979946562s)
--- PASS: TestMissingContainerUpgrade (154.43s)
=== CONT TestStartStop
=== RUN TestStartStop/group
=== RUN TestStartStop/group/old-k8s-version
=== PAUSE TestStartStop/group/old-k8s-version
=== RUN TestStartStop/group/newest-cni
=== PAUSE TestStartStop/group/newest-cni
=== RUN TestStartStop/group/default-k8s-different-port
=== PAUSE TestStartStop/group/default-k8s-different-port
=== RUN TestStartStop/group/no-preload
=== PAUSE TestStartStop/group/no-preload
=== RUN TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== RUN TestStartStop/group/embed-certs
=== PAUSE TestStartStop/group/embed-certs
=== CONT TestNoKubernetes
=== RUN TestNoKubernetes/serial
=== RUN TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=docker
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=docker: exit status 14 (109.051431ms)
-- stdout --
* [NoKubernetes-20220221085208-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13641
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
to unset a global config run:
$ minikube config unset kubernetes-version
** /stderr **
=== RUN TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker --container-runtime=docker
E0221 08:52:30.569318 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker --container-runtime=docker: (25.501896034s)
no_kubernetes_test.go:201: (dbg) Run: out/minikube-linux-amd64 -p NoKubernetes-20220221085208-6550 status -o json
=== RUN TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker --container-runtime=docker
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (53.980967028s)
version_upgrade_test.go:234: (dbg) Run: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220221085141-6550
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220221085141-6550: (1.362954694s)
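As the MK_USAGE failure a few lines above shows, --no-kubernetes cannot be combined with --kubernetes-version. The valid invocation, and the kubelet check the serial tests use later, look roughly like this (profile name "demo" is illustrative):

  minikube start -p demo --no-kubernetes --driver=docker --container-runtime=docker
  # exits non-zero because no kubelet service is running inside the node
  minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"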
version_upgrade_test.go:239: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-20220221085141-6550 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220221085141-6550 status --format={{.Host}}: exit status 7 (98.89977ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220221085158-6550 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=docker: (48.042023343s)
=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run: out/minikube-linux-amd64 start -p pause-20220221085158-6550 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker --container-runtime=docker: (14.356059262s)
no_kubernetes_test.go:201: (dbg) Run: out/minikube-linux-amd64 -p NoKubernetes-20220221085208-6550 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220221085208-6550 status -o json: exit status 2 (480.144285ms)
-- stdout --
{"Name":"NoKubernetes-20220221085208-6550","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:125: (dbg) Run: out/minikube-linux-amd64 delete -p NoKubernetes-20220221085208-6550
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220221085208-6550: (2.698155884s)
=== RUN TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker --container-runtime=docker: (7.004906892s)
=== RUN TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run: out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.391933ms)
** stderr **
ssh: Process exited with status 3
** /stderr **
=== RUN TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run: out/minikube-linux-amd64 profile list
no_kubernetes_test.go:170: (dbg) Done: out/minikube-linux-amd64 profile list: (5.142586841s)
no_kubernetes_test.go:180: (dbg) Run: out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.048868831s)
=== RUN TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run: out/minikube-linux-amd64 stop -p NoKubernetes-20220221085208-6550
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (28.655754657s)
version_upgrade_test.go:255: (dbg) Run: kubectl --context kubernetes-upgrade-20220221085141-6550 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker: exit status 106 (79.898977ms)
-- stdout --
* [kubernetes-upgrade-20220221085141-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13641
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.5-rc.0 cluster to v1.16.0
* Suggestion:
  1) Recreate the cluster with Kubernetes 1.16.0, by running:
     minikube delete -p kubernetes-upgrade-20220221085141-6550
     minikube start -p kubernetes-upgrade-20220221085141-6550 --kubernetes-version=v1.16.0
  2) Create a second cluster with Kubernetes 1.16.0, by running:
     minikube start -p kubernetes-upgrade-20220221085141-65502 --kubernetes-version=v1.16.0
  3) Use the existing cluster at version Kubernetes 1.23.5-rc.0, by running:
     minikube start -p kubernetes-upgrade-20220221085141-6550 --kubernetes-version=v1.23.5-rc.0
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220221085208-6550: (1.312518813s)
=== RUN TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker --container-runtime=docker
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker --container-runtime=docker: (6.322407974s)
=== RUN TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run: out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.615723ms)
** stderr **
ssh: Process exited with status 3
** /stderr **
=== CONT TestNoKubernetes
helpers_test.go:176: Cleaning up "NoKubernetes-20220221085208-6550" profile ...
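The upgrade path exercised above is stop-then-start with a newer --kubernetes-version; downgrades are refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), as the suggestion block shows. A rough hand-run equivalent (profile name "demo" is illustrative, versions taken from the run above):

  minikube start -p demo --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker
  minikube stop -p demo
  minikube start -p demo --kubernetes-version=v1.23.5-rc.0 --driver=docker --container-runtime=docker
  # going back down requires deleting the profile and starting fresh, per the suggestion above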
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p NoKubernetes-20220221085208-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220221085208-6550: (2.023692826s)
--- PASS: TestNoKubernetes (67.24s)
    --- PASS: TestNoKubernetes/serial (65.22s)
        --- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
        --- PASS: TestNoKubernetes/serial/StartWithK8s (25.95s)
        --- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.53s)
        --- PASS: TestNoKubernetes/serial/Start (7.01s)
        --- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
        --- PASS: TestNoKubernetes/serial/ProfileList (6.19s)
        --- PASS: TestNoKubernetes/serial/Stop (1.31s)
        --- PASS: TestNoKubernetes/serial/StartNoArgs (6.32s)
        --- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)
=== CONT TestStoppedBinaryUpgrade
=== RUN TestStoppedBinaryUpgrade/Setup
=== RUN TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run: /tmp/minikube-v1.9.0.3521107852.exe start -p stopped-upgrade-20220221085315-6550 --memory=2200 --vm-driver=docker --container-runtime=docker
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (16.169707155s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220221085141-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220221085141-6550
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220221085158-6550 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (38.96683484s)
=== RUN TestPause/serial/Pause
pause_test.go:111: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20220221085158-6550 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220221085158-6550 --alsologtostderr -v=5: (1.213350307s)
=== RUN TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run: out/minikube-linux-amd64 status -p pause-20220221085158-6550 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220221085158-6550 --output=json --layout=cluster: exit status 2 (396.48917ms)
-- stdout --
{"Name":"pause-20220221085158-6550","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220221085158-6550","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
=== RUN TestPause/serial/Unpause
pause_test.go:122: (dbg) Run: out/minikube-linux-amd64 unpause -p pause-20220221085158-6550 --alsologtostderr -v=5
=== RUN TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20220221085158-6550 --alsologtostderr -v=5
=== RUN TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete -p pause-20220221085158-6550 --alsologtostderr -v=5
E0221 08:53:29.174477 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
=== CONT TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220221085141-6550: (7.24090839s)
--- PASS: TestKubernetesUpgrade (107.64s)
=== CONT TestNetworkPlugins/group/auto
=== RUN TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p auto-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker --container-runtime=docker
=== CONT TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220221085158-6550 --alsologtostderr -v=5: (3.011218223s)
=== RUN TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run: out/minikube-linux-amd64 profile list --output json
pause_test.go:169: (dbg) Run: docker ps -a
pause_test.go:174: (dbg) Run: docker volume inspect pause-20220221085158-6550
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220221085158-6550: exit status 1 (39.876123ms)
-- stdout --
[]
-- /stdout --
** stderr **
Error: No such volume: pause-20220221085158-6550
** /stderr **
pause_test.go:179: (dbg) Run: docker network ls
=== CONT TestPause
helpers_test.go:176: Cleaning up "pause-20220221085158-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p pause-20220221085158-6550
--- PASS: TestPause (94.83s)
    --- PASS: TestPause/serial (94.47s)
        --- PASS: TestPause/serial/Start (48.04s)
        --- PASS: TestPause/serial/SecondStartNoReconfiguration (38.98s)
        --- PASS: TestPause/serial/Pause (1.21s)
        --- PASS: TestPause/serial/VerifyStatus (0.40s)
        --- PASS: TestPause/serial/Unpause (0.96s)
        --- PASS: TestPause/serial/PauseAgain (0.99s)
        --- PASS: TestPause/serial/DeletePaused (3.01s)
        --- PASS: TestPause/serial/VerifyDeletedResources (0.88s)
=== CONT TestNetworkPlugins/group/cilium
=== RUN TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p cilium-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker --container-runtime=docker
E0221 08:53:53.613729 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
=== CONT TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.3521107852.exe start -p stopped-upgrade-20220221085315-6550 --memory=2200 --vm-driver=docker --container-runtime=docker: (42.224250293s)
version_upgrade_test.go:199: (dbg) Run: /tmp/minikube-v1.9.0.3521107852.exe -p stopped-upgrade-20220221085315-6550 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.3521107852.exe -p stopped-upgrade-20220221085315-6550 stop: (2.418417317s)
version_upgrade_test.go:205: (dbg) Run: out/minikube-linux-amd64 start -p stopped-upgrade-20220221085315-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
E0221 08:54:05.983915 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:05.989228 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:05.999499 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.019761 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.060107 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.140432 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.301581 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.621704 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:07.262034 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:08.542192 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:11.102802 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:16.223627 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:26.464223 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220221085315-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (26.398707461s)
=== RUN TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run: out/minikube-linux-amd64 logs -p stopped-upgrade-20220221085315-6550
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220221085315-6550: (2.116740444s)
=== CONT TestStoppedBinaryUpgrade
helpers_test.go:176: Cleaning up "stopped-upgrade-20220221085315-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p stopped-upgrade-20220221085315-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20220221085315-6550: (2.494414532s)
--- PASS: TestStoppedBinaryUpgrade (76.18s)
    --- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)
    --- PASS: TestStoppedBinaryUpgrade/Upgrade (71.04s)
    --- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)
=== CONT TestNetworkPlugins/group/calico
=== RUN TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p calico-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker
E0221 08:54:33.149049 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
=== CONT TestCertExpiration
cert_options_test.go:132: (dbg) Run: out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker: (4.610638724s)
helpers_test.go:176: Cleaning up "cert-expiration-20220221085105-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p cert-expiration-20220221085105-6550
E0221 08:54:46.945048 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220221085105-6550: (2.794123983s)
--- PASS: TestCertExpiration (221.56s)
=== CONT TestNetworkPlugins/group/custom-weave
=== RUN TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p custom-weave-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=docker
=== CONT TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker --container-runtime=docker: (1m37.39847718s)
=== RUN TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-sxnkv" [134d1d0b-c8f4-489d-8794-db615edfa31d] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017061078s
=== RUN TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p cilium-20220221084934-6550 "pgrep -a kubelet"
=== RUN TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run: kubectl --context cilium-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-zgfg6" [e52d4934-5efb-4eb7-86bc-b5662d466b3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-zgfg6" [e52d4934-5efb-4eb7-86bc-b5662d466b3c] Running
E0221 08:55:27.905555 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.007096203s
=== RUN TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run: kubectl --context cilium-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
=== RUN TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run: kubectl --context cilium-20220221084934-6550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
=== RUN TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run: kubectl --context cilium-20220221084934-6550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
=== CONT TestNetworkPlugins/group/cilium
net_test.go:198: "cilium" test finished in 5m55.623126442s, failed=false
helpers_test.go:176: Cleaning up "cilium-20220221084934-6550" profile ...
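Each CNI group runs the same connectivity probes seen in the cilium section above; by hand they look roughly like this (the context name "demo" is illustrative, and the netcat manifest is the one from the minikube test tree's testdata directory):

  kubectl --context demo replace --force -f testdata/netcat-deployment.yaml
  kubectl --context demo exec deployment/netcat -- nslookup kubernetes.default               # DNS
  kubectl --context demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # localhost
  kubectl --context demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin

With --cni=false, as in the group that follows, no CNI provides pod DNS, which is why every nslookup below times out.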
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p cilium-20220221084934-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cilium-20220221084934-6550: (3.380219603s)
=== CONT TestNetworkPlugins/group/false
=== RUN TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p false-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker --container-runtime=docker
E0221 08:55:56.193062 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p false-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker --container-runtime=docker: (42.767454452s)
=== RUN TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p false-20220221084934-6550 "pgrep -a kubelet"
=== RUN TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run: kubectl --context false-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-gl7hj" [ba6605ea-dfed-40ce-83bd-cbd1b3c35da1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-gl7hj" [ba6605ea-dfed-40ce-83bd-cbd1b3c35da1] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006431582s
=== RUN TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166804753s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:56:49.826677 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200105307s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141837539s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:57:30.568649 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135333876s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15077008s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16303599s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:58:29.174144 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136895197s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156602484s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 08:59:05.984234 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:59:33.149060 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.249345945s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 08:59:33.667848 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151779206s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:00:10.800062 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.805340 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.815646 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.835911 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.876175 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.956525 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:11.116743 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:11.437135 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:12.077939 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:13.358145 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:15.918473 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:21.038745 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:31.279147 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:00:51.760221 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157586177s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:01:32.721004 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker --container-runtime=docker: (8m16.112122028s)
=== RUN TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p auto-20220221084933-6550 "pgrep -a kubelet"
=== RUN TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run: kubectl --context auto-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-v8bk5" [5544bafb-ba1b-44ac-aa68-7b9c71bd7d70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) helpers_test.go:343: "netcat-668db85669-v8bk5" [5544bafb-ba1b-44ac-aa68-7b9c71bd7d70] Running net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006416423s === RUN TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157342824s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/false/DNS net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159318154s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:02:30.569358 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/false/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165653864s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* === CONT TestNetworkPlugins/group/false net_test.go:198: "false" test finished in 13m6.955811393s, failed=true net_test.go:199: *** TestNetworkPlugins/group/false FAILED at 2022-02-21 09:02:40.957031533 +0000 UTC m=+2253.719351118 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/false]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect false-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect false-20220221084934-6550: -- stdout -- [ { "Id": "15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf", "Created": "2022-02-21T08:55:40.800071805Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 241367, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:55:41.193088059Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": 
"sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/resolv.conf", "HostnamePath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/hostname", "HostsPath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/hosts", "LogPath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf-json.log", "Name": "/false-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "false-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "false-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344
cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148/merged", "UpperDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148/diff", "WorkDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "false-20220221084934-6550", "Source": "/var/lib/docker/volumes/false-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "false-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "false-20220221084934-6550", "name.minikube.sigs.k8s.io": "false-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "c211cf589b40d4695a2757fea5bb7e84dcd2b6ac82849ffdcdccf4a415c7b962", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49374" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49373" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49370" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49372" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49371" } ] }, "SandboxKey": "/var/run/docker/netns/c211cf589b40", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "false-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.49.2" }, "Links": null, "Aliases": [ "15fce63787da", "false-20220221084934-6550" ], "NetworkID": "3aad4971443d81c436ad1afc5aaa14cfa5d6ed96df4c643898db907a8582d794", "EndpointID": 
"dfdc0aaf7bd14326a76ea9cab50ae553d8a473ab0d8abcd391cdde039b786634", "Gateway": "192.168.49.1", "IPAddress": "192.168.49.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:31:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p false-20220221084934-6550 -n false-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/false FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/false]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p false-20220221084934-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p false-20220221084934-6550 logs -n 25: (1.277167613s) helpers_test.go:253: TestNetworkPlugins/group/false logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | start | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:52:37 UTC | Mon, 21 Feb 2022 08:53:05 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | | --memory=2200 | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | | --alsologtostderr -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | stop | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:05 UTC | Mon, 21 Feb 2022 08:53:06 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | start | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:06 UTC | Mon, 21 Feb 2022 08:53:13 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:13 UTC | Mon, 21 Feb 2022 08:53:15 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | start | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:05 UTC | Mon, 21 Feb 2022 08:53:21 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | | --memory=2200 | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | | --alsologtostderr -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | start | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:52:46 UTC | Mon, 21 Feb 2022 08:53:25 UTC | | | --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:25 UTC | Mon, 21 Feb 2022 08:53:26 UTC | | | --alsologtostderr -v=5 | | | | | | | unpause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:27 UTC | Mon, 21 Feb 2022 08:53:28 UTC | | | --alsologtostderr -v=5 | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | 
| | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 
21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 08:55:33 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 08:55:33.077855 239635 out.go:297] Setting OutFile to fd 1 ... I0221 08:55:33.078244 239635 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:55:33.078260 239635 out.go:310] Setting ErrFile to fd 2... I0221 08:55:33.078267 239635 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:55:33.078547 239635 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 08:55:33.079122 239635 out.go:304] Setting JSON to false I0221 08:55:33.104574 239635 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2287,"bootTime":1645431446,"procs":1006,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:55:33.104709 239635 start.go:122] virtualization: kvm guest I0221 08:55:33.107749 239635 out.go:176] * [false-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 08:55:33.109511 239635 out.go:176] - MINIKUBE_LOCATION=13641 I0221 08:55:33.108048 239635 notify.go:193] Checking for updates... I0221 08:55:33.111043 239635 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 08:55:33.112576 239635 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:55:33.114627 239635 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 08:55:33.116118 239635 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 08:55:33.116659 239635 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:33.116787 239635 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:33.116906 239635 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:33.116975 239635 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:55:33.167303 239635 docker.go:132] docker version: linux-20.10.12 I0221 08:55:33.167394 239635 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:55:33.276263 239635 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk 
syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:55:33.197540287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:55:33.276357 239635 docker.go:237] overlay module found I0221 08:55:33.279678 239635 out.go:176] * Using the docker driver based on user configuration I0221 08:55:33.279708 239635 start.go:281] selected driver: docker I0221 08:55:33.279713 239635 start.go:798] validating driver "docker" against I0221 08:55:33.279735 239635 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 08:55:33.279796 239635 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 08:55:33.279816 239635 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 08:55:33.281318 239635 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 08:55:33.281928 239635 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:55:33.384711 239635 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:55:33.318375786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 08:55:33.384840 239635 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 08:55:33.384981 239635 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 08:55:33.385004 239635 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 08:55:33.385016 239635 cni.go:93] Creating CNI manager for "false" I0221 08:55:33.385025 239635 start_flags.go:302] config: {Name:false-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:55:33.387333 239635 out.go:176] * Starting control plane node false-20220221084934-6550 in cluster false-20220221084934-6550 I0221 08:55:33.387377 239635 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:55:33.388617 239635 out.go:176] * Pulling base image ... 
I0221 08:55:33.388646 239635 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:55:33.388678 239635 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 08:55:33.388691 239635 cache.go:57] Caching tarball of preloaded images I0221 08:55:33.388734 239635 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:55:33.388928 239635 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 08:55:33.388943 239635 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 08:55:33.389067 239635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/config.json ... I0221 08:55:33.389095 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/config.json: {Name:mk5e5f0594e41817331267f3d5f1d321ef035e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:33.445771 239635 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:55:33.445812 239635 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 08:55:33.445833 239635 cache.go:208] Successfully downloaded all kic artifacts I0221 08:55:33.445873 239635 start.go:313] acquiring machines lock for false-20220221084934-6550: {Name:mk2f605a05695ae89fd93473685b8b7565d11497 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:55:33.446033 239635 start.go:317] acquired machines lock for "false-20220221084934-6550" in 132.793µs I0221 08:55:33.446064 239635 start.go:89] Provisioning new machine with config: &{Name:false-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker 
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:55:33.446171 239635 start.go:126] createHost starting for "" (driver="docker") I0221 08:55:31.725027 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:32.608422 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:34.608830 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:33.593883 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:35.603309 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:33.448704 239635 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 08:55:33.448996 239635 start.go:160] libmachine.API.Create for "false-20220221084934-6550" (driver="docker") I0221 08:55:33.449030 239635 client.go:168] LocalClient.Create starting I0221 08:55:33.449145 239635 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 08:55:33.449186 239635 main.go:130] libmachine: Decoding PEM data... I0221 08:55:33.449221 239635 main.go:130] libmachine: Parsing certificate... I0221 08:55:33.449284 239635 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 08:55:33.449304 239635 main.go:130] libmachine: Decoding PEM data... I0221 08:55:33.449317 239635 main.go:130] libmachine: Parsing certificate... 
I0221 08:55:33.449799 239635 cli_runner.go:133] Run: docker network inspect false-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:55:33.484614 239635 cli_runner.go:180] docker network inspect false-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:55:33.484710 239635 network_create.go:254] running [docker network inspect false-20220221084934-6550] to gather additional debugging logs... I0221 08:55:33.484745 239635 cli_runner.go:133] Run: docker network inspect false-20220221084934-6550 W0221 08:55:33.525919 239635 cli_runner.go:180] docker network inspect false-20220221084934-6550 returned with exit code 1 I0221 08:55:33.525955 239635 network_create.go:257] error running [docker network inspect false-20220221084934-6550]: docker network inspect false-20220221084934-6550: exit status 1 stdout: [] stderr: Error: No such network: false-20220221084934-6550 I0221 08:55:33.525971 239635 network_create.go:259] output of [docker network inspect false-20220221084934-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: false-20220221084934-6550 ** /stderr ** I0221 08:55:33.526037 239635 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:55:33.567653 239635 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010890] misses:0} I0221 08:55:33.567721 239635 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 08:55:33.567755 239635 network_create.go:106] attempt to create docker network false-20220221084934-6550 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
I0221 08:55:33.567812 239635 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220221084934-6550 I0221 08:55:33.653745 239635 network_create.go:90] docker network false-20220221084934-6550 192.168.49.0/24 created I0221 08:55:33.653798 239635 kic.go:106] calculated static IP "192.168.49.2" for the "false-20220221084934-6550" container I0221 08:55:33.653860 239635 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 08:55:33.689779 239635 cli_runner.go:133] Run: docker volume create false-20220221084934-6550 --label name.minikube.sigs.k8s.io=false-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true I0221 08:55:33.735189 239635 oci.go:102] Successfully created a docker volume false-20220221084934-6550 I0221 08:55:33.735273 239635 cli_runner.go:133] Run: docker run --rm --name false-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220221084934-6550 --entrypoint /usr/bin/test -v false-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 08:55:34.516202 239635 oci.go:106] Successfully prepared a docker volume false-20220221084934-6550 I0221 08:55:34.516254 239635 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:55:34.516274 239635 kic.go:179] Starting extracting preloaded images to volume ... I0221 08:55:34.516349 239635 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 08:55:34.770308 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:37.819456 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:37.082853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:39.082914 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:41.084278 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:38.094303 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:40.594209 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:40.648604 239635 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220221084934-6550:/extractDir 
gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.132215188s) I0221 08:55:40.648640 239635 kic.go:188] duration metric: took 6.132363 seconds to extract preloaded images to volume W0221 08:55:40.648677 239635 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 08:55:40.648691 239635 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0221 08:55:40.648745 239635 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 08:55:40.764575 239635 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220221084934-6550 --name false-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220221084934-6550 --network false-20220221084934-6550 --ip 192.168.49.2 --volume false-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 08:55:41.202324 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Running}} I0221 08:55:41.240762 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:55:41.276327 239635 cli_runner.go:133] Run: docker exec false-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables I0221 08:55:41.344357 239635 oci.go:281] the created container "false-20220221084934-6550" has a running status. I0221 08:55:41.344393 239635 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa... I0221 08:55:41.689215 239635 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 08:55:41.804415 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:55:41.852428 239635 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 08:55:41.852455 239635 kic_runner.go:114] Args: [docker exec --privileged false-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 08:55:41.945935 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:55:41.987788 239635 machine.go:88] provisioning docker machine ... 
I0221 08:55:41.987822 239635 ubuntu.go:169] provisioning hostname "false-20220221084934-6550" I0221 08:55:41.987877 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.028280 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:42.028524 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:42.028549 239635 main.go:130] libmachine: About to run SSH command: sudo hostname false-20220221084934-6550 && echo "false-20220221084934-6550" | sudo tee /etc/hostname I0221 08:55:42.166605 239635 main.go:130] libmachine: SSH cmd err, output: : false-20220221084934-6550 I0221 08:55:42.166723 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.200507 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:42.200766 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:42.200798 239635 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sfalse-20220221084934-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220221084934-6550/g' /etc/hosts; else echo '127.0.1.1 false-20220221084934-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:55:42.331358 239635 main.go:130] libmachine: SSH cmd err, output: : I0221 08:55:42.331392 239635 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:55:42.331422 239635 ubuntu.go:177] setting up certificates I0221 08:55:42.331432 239635 provision.go:83] configureAuth start I0221 08:55:42.331488 239635 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220221084934-6550 I0221 08:55:42.370135 239635 provision.go:138] copyHostCerts I0221 08:55:42.370196 239635 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... 
I0221 08:55:42.370203 239635 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:55:42.370259 239635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:55:42.370365 239635 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 08:55:42.370382 239635 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:55:42.370415 239635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:55:42.370470 239635 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 08:55:42.370481 239635 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:55:42.370500 239635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:55:42.370567 239635 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.false-20220221084934-6550 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220221084934-6550] I0221 08:55:42.479623 239635 provision.go:172] copyRemoteCerts I0221 08:55:42.479692 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:55:42.479733 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.515815 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:42.602649 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:55:42.621359 239635 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0221 08:55:42.640061 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0221 08:55:42.658008 239635 provision.go:86] duration metric: configureAuth took 326.566164ms
I0221 08:55:42.658036 239635 ubuntu.go:193] setting minikube options for container-runtime
I0221 08:55:42.658187 239635 config.go:176] Loaded profile config "false-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:55:42.658226 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550
I0221 08:55:42.693609 239635 main.go:130] libmachine: Using SSH client type: native
I0221 08:55:42.693748 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 }
I0221 08:55:42.693763 239635 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 08:55:42.827656 239635 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 08:55:42.827684 239635 ubuntu.go:71] root file system type: overlay
I0221 08:55:42.827880 239635 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 08:55:42.827961 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550
I0221 08:55:42.869428 239635 main.go:130] libmachine: Using SSH client type: native
I0221 08:55:42.869587 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 }
I0221 08:55:42.869645 239635 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 08:55:43.004737 239635 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0221 08:55:43.004838 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550
I0221 08:55:40.855138 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}}
I0221 08:55:43.899128 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}}
I0221 08:55:43.042605 239635 main.go:130] libmachine: Using SSH client type: native
I0221 08:55:43.042768 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 }
I0221 08:55:43.042815 239635 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 08:55:43.849500 239635 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-02-21 08:55:43.002186410 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0221 08:55:43.849532 239635 machine.go:91] provisioned docker machine in 1.86172323s
I0221 08:55:43.849543 239635 client.go:171] LocalClient.Create took 10.400507664s
I0221 08:55:43.849553 239635 start.go:168] duration metric: libmachine.API.Create for "false-20220221084934-6550" took 10.400558541s
I0221 08:55:43.849560 239635 start.go:267] post-start starting for "false-20220221084934-6550" (driver="docker")
I0221 08:55:43.849565 239635 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 08:55:43.849623 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 08:55:43.849660 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550
I0221 08:55:43.883791 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker}
I0221 08:55:43.975048 239635 ssh_runner.go:195] Run: cat /etc/os-release
I0221 08:55:43.977983 239635 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 08:55:43.978007 239635 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 08:55:43.978016 239635 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 08:55:43.978020 239635 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 08:55:43.978030 239635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
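The provisioner above copied ca.pem, server.pem and server-key.pem into /etc/docker and restarted dockerd with --tlsverify, so the daemon listening on tcp://0.0.0.0:2376 only accepts mutually-authenticated TLS. A minimal client-side check, assuming $PORT stands for the host port mapped to 2376/tcp and $CERTS for this profile's .minikube/certs directory (both placeholders, not names from the log):

  docker --tlsverify \
    --tlscacert "$CERTS/ca.pem" --tlscert "$CERTS/cert.pem" --tlskey "$CERTS/key.pem" \
    -H tcp://127.0.0.1:"$PORT" version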
I0221 08:55:43.978083 239635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:55:43.978146 239635 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:55:43.978220 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:55:43.985681 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:55:44.006873 239635 start.go:270] post-start completed in 157.29813ms I0221 08:55:44.007350 239635 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220221084934-6550 I0221 08:55:44.048382 239635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/config.json ... I0221 08:55:44.048631 239635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:55:44.048678 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:44.082129 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:44.171620 239635 start.go:129] duration metric: createHost completed in 10.725436331s I0221 08:55:44.171655 239635 start.go:80] releasing machines lock for "false-20220221084934-6550", held for 10.725604036s I0221 08:55:44.171749 239635 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220221084934-6550 I0221 08:55:44.208115 239635 ssh_runner.go:195] Run: systemctl --version I0221 08:55:44.208167 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:44.208180 239635 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:55:44.208237 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:44.251474 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:44.257426 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:44.486291 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 08:55:44.496111 239635 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:55:44.506776 239635 cruntime.go:272] skipping 
containerd shutdown because we are bound to it
I0221 08:55:44.506843 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0221 08:55:44.520408 239635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0221 08:55:44.537069 239635 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0221 08:55:44.627356 239635 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0221 08:55:44.721100 239635 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 08:55:44.732712 239635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0221 08:55:44.818414 239635 ssh_runner.go:195] Run: sudo systemctl start docker
I0221 08:55:44.830044 239635 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 08:55:44.877999 239635 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 08:55:44.924321 239635 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
I0221 08:55:44.924423 239635 cli_runner.go:133] Run: docker network inspect false-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 08:55:44.958433 239635 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0221 08:55:44.961770 239635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 08:55:44.973175 239635 out.go:176]   - kubelet.housekeeping-interval=5m
I0221 08:55:44.973239 239635 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 08:55:44.973284 239635 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 08:55:45.006776 239635 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0221 08:55:45.006797 239635 docker.go:537] Images already preloaded, skipping extraction
I0221 08:55:45.006840 239635 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 08:55:45.041682 239635 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0221 08:55:45.041706 239635 cache_images.go:84] Images are preloaded, skipping loading
I0221 08:55:45.041748 239635 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0221 08:55:45.126963 239635 cni.go:93] Creating CNI manager for "false"
I0221 08:55:45.127028 239635 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
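The crictl.yaml written above points both the runtime and image endpoints at the dockershim socket, so CRI tooling inside the node talks to Docker. A sketch of using it, assuming crictl ships in the kicbase image:

  # runs inside the node container as root; crictl reads /etc/crictl.yaml automatically
  docker exec false-20220221084934-6550 crictl ps -a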
I0221 08:55:45.127049 239635 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220221084934-6550 NodeName:false-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 08:55:45.127209 239635 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "false-20220221084934-6550"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 08:55:45.127323 239635 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=false-20220221084934-6550 --housekeeping-interval=5m
--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} I0221 08:55:45.127402 239635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 08:55:45.134691 239635 binaries.go:44] Found k8s binaries, skipping transfer I0221 08:55:45.134765 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 08:55:45.142486 239635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes) I0221 08:55:45.155729 239635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 08:55:45.169105 239635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes) I0221 08:55:45.182350 239635 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0221 08:55:45.185390 239635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:55:45.195280 239635 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550 for IP: 192.168.49.2 I0221 08:55:45.195372 239635 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 08:55:45.195409 239635 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 08:55:45.195467 239635 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.key I0221 08:55:45.195482 239635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt with IP's: [] I0221 08:55:45.464966 239635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt ... I0221 08:55:45.465001 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: {Name:mkdc6c86a484bb695bb258b5feb6185d1eb29a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.465219 239635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.key ... 
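The server certificate generated earlier carries explicit SANs (san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220221084934-6550]), which is what lets clients verify these endpoints by IP as well as by name. To confirm the SANs on any of the PEMs, with $MINIKUBE_HOME standing in for the long .minikube path used throughout this log:

  openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
    | grep -A1 'Subject Alternative Name'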
I0221 08:55:45.465236 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.key: {Name:mk5469692e52021cfcc273116a99298fde294eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.465326 239635 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2 I0221 08:55:45.465343 239635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 08:55:45.532099 239635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2 ... I0221 08:55:45.532131 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2: {Name:mk4f9bddf2d8f47495bb7872a93a24c91d949bce Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.532299 239635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2 ... I0221 08:55:45.532312 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2: {Name:mk1d78c9de9f8825620a84ece39b31abc4ec2d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.532389 239635 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt I0221 08:55:45.532442 239635 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key I0221 08:55:45.532496 239635 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key I0221 08:55:45.532509 239635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt with IP's: [] I0221 08:55:45.611100 239635 crypto.go:156] Writing cert to 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt ... I0221 08:55:45.611129 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt: {Name:mkbcc271b992841575783c9f82cdc99f41db88f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.611296 239635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key ... I0221 08:55:45.611310 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key: {Name:mkfdc42be1b4c25c7176716ce7c9f5ed6c0ed3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.611463 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 08:55:45.611499 239635 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 08:55:45.611506 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 08:55:45.611532 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 08:55:45.611557 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 08:55:45.611581 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 08:55:45.611628 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 08:55:45.612674 239635 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 08:55:45.631353 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0221 08:55:45.648917 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 08:55:45.666591 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0221 08:55:45.684159 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 08:55:45.701818 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 08:55:45.719473 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 08:55:45.737014 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 08:55:45.754513 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 08:55:45.772036 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 08:55:45.789829 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 08:55:45.807112 239635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 08:55:45.819894 239635 ssh_runner.go:195] Run: openssl version I0221 08:55:45.824776 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 08:55:45.832744 239635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 08:55:45.835837 239635 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 08:55:45.835890 239635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 08:55:45.840887 239635 ssh_runner.go:195] Run: 
sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 08:55:45.848695 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 08:55:45.856224 239635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 08:55:45.859423 239635 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 08:55:45.859475 239635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 08:55:45.864292 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 08:55:45.871752 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 08:55:45.878978 239635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 08:55:45.881994 239635 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 08:55:45.882039 239635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 08:55:45.886887 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 08:55:45.894360 239635 kubeadm.go:391] StartCluster: {Name:false-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:55:45.894481 239635 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 08:55:45.926642 239635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 08:55:45.933993 239635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 08:55:45.941434 239635 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 08:55:45.941488 239635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 08:55:45.949175 239635 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 08:55:45.949230 239635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 08:55:43.583801 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:46.104227 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:43.094975 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:45.594422 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:46.578129 239635 out.go:203] - Generating certificates and keys ... I0221 08:55:46.949101 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:48.608316 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:51.082452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:48.094138 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:50.094339 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:49.236357 239635 out.go:203] - Booting up control plane ... 
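The single kubeadm init above (note the long --ignore-preflight-errors list, needed because the harness ignores SystemVerification under the docker driver) drives the phase markers that follow: certificate generation, control-plane boot, then RBAC. Individual phases can be replayed against the same rendered config from inside the node, for example:

  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
  sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml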
I0221 08:55:49.985044 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:53.019690 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:53.082812 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:55.604982 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:52.593954 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:55.094041 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:57.094158 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:57.284223 239635 out.go:203] - Configuring RBAC rules ... I0221 08:55:57.697593 239635 cni.go:93] Creating CNI manager for "false" I0221 08:55:57.697659 239635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 08:55:57.697727 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:57.697758 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=false-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:57.827880 239635 ops.go:34] apiserver oom_adj: -16 I0221 08:55:57.827972 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:56.060744 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:59.104251 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:58.083480 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:00.107900 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:59.594464 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:01.594499 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:58.858347 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:59.358534 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:59.858264 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:00.358933 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:00.857953 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:01.358649 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:01.858853 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:02.358809 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:02.858068 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:02.144689 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:02.108600 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:04.109005 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:06.608183 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:03.595044 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:06.096228 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:03.358732 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:03.858149 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:04.358501 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:04.858946 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:05.358259 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:05.858632 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:06.358646 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:06.858739 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:07.358889 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:07.858256 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:05.187189 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:08.234700 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:08.358051 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:08.858255 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:09.358060 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:09.858891 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:10.358357 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:10.858913 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:10.944981 239635 kubeadm.go:1020] duration metric: took 13.247304119s to wait for elevateKubeSystemPrivileges. I0221 08:56:10.945020 239635 kubeadm.go:393] StartCluster complete in 25.050665602s I0221 08:56:10.945040 239635 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:56:10.945157 239635 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:56:10.947338 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:56:11.510362 239635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20220221084934-6550" rescaled to 1 I0221 08:56:11.510417 239635 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:56:11.512591 239635 out.go:176] * Verifying Kubernetes components... I0221 08:56:11.510487 239635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 08:56:11.510508 239635 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 08:56:11.512794 239635 addons.go:65] Setting storage-provisioner=true in profile "false-20220221084934-6550" I0221 08:56:11.512817 239635 addons.go:153] Setting addon storage-provisioner=true in "false-20220221084934-6550" W0221 08:56:11.512823 239635 addons.go:165] addon storage-provisioner should already be in state true I0221 08:56:11.510662 239635 config.go:176] Loaded profile config "false-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:56:11.512851 239635 host.go:66] Checking if "false-20220221084934-6550" exists ... I0221 08:56:11.512662 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:56:11.512894 239635 addons.go:65] Setting default-storageclass=true in profile "false-20220221084934-6550" I0221 08:56:11.512913 239635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20220221084934-6550" I0221 08:56:11.513247 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:56:11.513326 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:56:11.545075 239635 node_ready.go:35] waiting up to 5m0s for node "false-20220221084934-6550" to be "Ready" ... 
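The repeated `kubectl get sa default` runs above are minikube waiting for the default ServiceAccount to be created by kube-controller-manager; the timestamps show a 500ms retry interval. A minimal standalone sketch of the same wait, assuming the kubeconfig and binary paths shown in the log:

    # poll until the default ServiceAccount exists (0.5s interval, as in the log)
    until sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done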
I0221 08:56:11.551425 239635 node_ready.go:49] node "false-20220221084934-6550" has status "Ready":"True" I0221 08:56:11.551454 239635 node_ready.go:38] duration metric: took 6.346806ms waiting for node "false-20220221084934-6550" to be "Ready" ... I0221 08:56:11.551465 239635 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:56:09.083257 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:11.584369 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:08.594008 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:10.594274 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:11.571730 239635 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 08:56:11.571897 239635 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:56:11.571919 239635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 08:56:11.571975 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:56:11.575024 239635 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-9k8b6" in "kube-system" namespace to be "Ready" ... I0221 08:56:11.605352 239635 addons.go:153] Setting addon default-storageclass=true in "false-20220221084934-6550" W0221 08:56:11.605400 239635 addons.go:165] addon default-storageclass should already be in state true I0221 08:56:11.605431 239635 host.go:66] Checking if "false-20220221084934-6550" exists ... I0221 08:56:11.605905 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:56:11.649737 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:56:11.670390 239635 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 08:56:11.670419 239635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 08:56:11.670473 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:56:11.714040 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:56:11.731413 239635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 08:56:11.935089 239635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:56:11.937307 239635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 08:56:13.219947 239635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.48849471s) I0221 08:56:13.219982 239635 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0221 08:56:13.307245 239635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.372085112s) I0221 08:56:13.307328 239635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.369989228s) I0221 08:56:11.277437 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:13.309383 239635 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 08:56:13.309406 239635 addons.go:417] enableAddons completed in 1.798918883s I0221 08:56:13.620775 239635 pod_ready.go:102] pod "coredns-64897985d-9k8b6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:14.119861 239635 pod_ready.go:92] pod "coredns-64897985d-9k8b6" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.119953 239635 pod_ready.go:81] duration metric: took 2.544859967s waiting for pod "coredns-64897985d-9k8b6" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.119988 239635 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-snkv2" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.125349 239635 pod_ready.go:92] pod "coredns-64897985d-snkv2" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.125379 239635 pod_ready.go:81] duration metric: took 5.372629ms waiting for pod "coredns-64897985d-snkv2" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.125392 239635 pod_ready.go:78] waiting up to 5m0s for pod "etcd-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.130521 239635 pod_ready.go:92] pod "etcd-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.130548 239635 pod_ready.go:81] duration metric: took 5.148117ms waiting for pod "etcd-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.130573 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
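The sed pipeline completed above injects a `hosts` block mapping host.minikube.internal to 192.168.49.1 into the CoreDNS Corefile. One way to confirm the record resolves from inside the cluster, sketched with an illustrative pod name and busybox tag:

    # run a throwaway pod in the profile's context and query the injected record
    kubectl --context false-20220221084934-6550 run dns-probe --rm -it \
        --restart=Never --image=busybox:1.34 -- nslookup host.minikube.internal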
I0221 08:56:14.136032 239635 pod_ready.go:92] pod "kube-apiserver-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.136057 239635 pod_ready.go:81] duration metric: took 5.474386ms waiting for pod "kube-apiserver-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.136070 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.141889 239635 pod_ready.go:92] pod "kube-controller-manager-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.141912 239635 pod_ready.go:81] duration metric: took 5.834181ms waiting for pod "kube-controller-manager-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.141931 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-mlfhq" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.517663 239635 pod_ready.go:92] pod "kube-proxy-mlfhq" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.517685 239635 pod_ready.go:81] duration metric: took 375.747262ms waiting for pod "kube-proxy-mlfhq" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.517694 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.917218 239635 pod_ready.go:92] pod "kube-scheduler-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.917240 239635 pod_ready.go:81] duration metric: took 399.540555ms waiting for pod "kube-scheduler-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.917249 239635 pod_ready.go:38] duration metric: took 3.365771221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:56:14.917273 239635 api_server.go:51] waiting for apiserver process to appear ... I0221 08:56:14.917314 239635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 08:56:14.945692 239635 api_server.go:71] duration metric: took 3.435246847s to wait for apiserver process to appear ... I0221 08:56:14.945777 239635 api_server.go:87] waiting for apiserver healthz status ... I0221 08:56:14.945798 239635 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 08:56:14.951742 239635 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 08:56:14.952909 239635 api_server.go:140] control plane version: v1.23.4 I0221 08:56:14.952937 239635 api_server.go:130] duration metric: took 7.147051ms to wait for apiserver health ... I0221 08:56:14.952948 239635 system_pods.go:43] waiting for kube-system pods to appear ... 
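The healthz wait above is a plain HTTPS GET against the apiserver, with certificate verification skipped. From inside the node (or anything on the cluster's 192.168.49.0/24 network) the equivalent manual check is, as a sketch:

    # -k skips TLS verification, matching minikube's probe; prints "ok" on a
    # healthy apiserver, consistent with the 200 logged above
    curl -sk https://192.168.49.2:8443/healthz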
I0221 08:56:15.120310 239635 system_pods.go:59] 8 kube-system pods found I0221 08:56:15.120352 239635 system_pods.go:61] "coredns-64897985d-9k8b6" [7231ddf1-a325-4916-8188-6516121331ce] Running I0221 08:56:15.120358 239635 system_pods.go:61] "coredns-64897985d-snkv2" [2ca2a7a8-2903-47ca-bcf3-097175f8bc79] Running I0221 08:56:15.120364 239635 system_pods.go:61] "etcd-false-20220221084934-6550" [85157cb6-493b-47f3-a078-9c7f3086c0ae] Running I0221 08:56:15.120373 239635 system_pods.go:61] "kube-apiserver-false-20220221084934-6550" [bd7518d6-e2db-4f22-9f37-fa5831613936] Running I0221 08:56:15.120380 239635 system_pods.go:61] "kube-controller-manager-false-20220221084934-6550" [0bca9a27-5e63-4cd7-8c81-e56c354e24da] Running I0221 08:56:15.120389 239635 system_pods.go:61] "kube-proxy-mlfhq" [b1256bd2-9a7f-4f1f-861d-1eedacb992be] Running I0221 08:56:15.120395 239635 system_pods.go:61] "kube-scheduler-false-20220221084934-6550" [4f15dbe8-f5f0-4895-a7a2-ca7d40a0e148] Running I0221 08:56:15.120408 239635 system_pods.go:61] "storage-provisioner" [e58a0e76-397e-4653-82c8-a63621513203] Running I0221 08:56:15.120414 239635 system_pods.go:74] duration metric: took 167.460543ms to wait for pod list to return data ... I0221 08:56:15.120423 239635 default_sa.go:34] waiting for default service account to be created ... I0221 08:56:15.317796 239635 default_sa.go:45] found service account: "default" I0221 08:56:15.317819 239635 default_sa.go:55] duration metric: took 197.391118ms for default service account to be created ... I0221 08:56:15.317826 239635 system_pods.go:116] waiting for k8s-apps to be running ... I0221 08:56:15.520020 239635 system_pods.go:86] 8 kube-system pods found I0221 08:56:15.520058 239635 system_pods.go:89] "coredns-64897985d-9k8b6" [7231ddf1-a325-4916-8188-6516121331ce] Running I0221 08:56:15.520067 239635 system_pods.go:89] "coredns-64897985d-snkv2" [2ca2a7a8-2903-47ca-bcf3-097175f8bc79] Running I0221 08:56:15.520073 239635 system_pods.go:89] "etcd-false-20220221084934-6550" [85157cb6-493b-47f3-a078-9c7f3086c0ae] Running I0221 08:56:15.520080 239635 system_pods.go:89] "kube-apiserver-false-20220221084934-6550" [bd7518d6-e2db-4f22-9f37-fa5831613936] Running I0221 08:56:15.520088 239635 system_pods.go:89] "kube-controller-manager-false-20220221084934-6550" [0bca9a27-5e63-4cd7-8c81-e56c354e24da] Running I0221 08:56:15.520099 239635 system_pods.go:89] "kube-proxy-mlfhq" [b1256bd2-9a7f-4f1f-861d-1eedacb992be] Running I0221 08:56:15.520110 239635 system_pods.go:89] "kube-scheduler-false-20220221084934-6550" [4f15dbe8-f5f0-4895-a7a2-ca7d40a0e148] Running I0221 08:56:15.520121 239635 system_pods.go:89] "storage-provisioner" [e58a0e76-397e-4653-82c8-a63621513203] Running I0221 08:56:15.520133 239635 system_pods.go:126] duration metric: took 202.301442ms to wait for k8s-apps to be running ... I0221 08:56:15.520146 239635 system_svc.go:44] waiting for kubelet service to be running .... I0221 08:56:15.520194 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:56:15.531798 239635 system_svc.go:56] duration metric: took 11.647388ms WaitForService to wait for kubelet. I0221 08:56:15.531849 239635 kubeadm.go:548] duration metric: took 4.021409458s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 08:56:15.531874 239635 node_conditions.go:102] verifying NodePressure condition ... 
I0221 08:56:15.718468 239635 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 08:56:15.718504 239635 node_conditions.go:123] node cpu capacity is 8 I0221 08:56:15.718520 239635 node_conditions.go:105] duration metric: took 186.636719ms to run NodePressure ... I0221 08:56:15.718534 239635 start.go:213] waiting for startup goroutines ... I0221 08:56:15.765073 239635 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0) I0221 08:56:15.767772 239635 out.go:176] * Done! kubectl is now configured to use "false-20220221084934-6550" cluster and "default" namespace by default I0221 08:56:13.603328 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:15.607461 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:12.594837 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:15.094474 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:17.095174 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:14.327798 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:17.363160 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:17.608185 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:20.103368 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:19.595203 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:22.094022 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:20.398233 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:23.439137 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:22.106959 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:24.109509 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.606973 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:24.094532 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.594351 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.477627 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:28.607609 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:31.082276 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:29.094290 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:31.595545 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status 
"Ready":"False" I0221 08:56:29.516965 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:32.555149 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:33.107320 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:35.583226 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:34.094168 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:36.094581 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:35.592238 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:38.627148 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:38.107435 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:40.606736 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:38.593443 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:40.593849 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:41.663147 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:43.082434 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:45.107171 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:42.594084 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:44.594768 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:47.093943 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:44.699280 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:47.744968 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:47.583447 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:49.608204 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:51.608560 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:49.593364 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:51.593995 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:50.783109 208829 stop.go:59] stop err: Maximum number of retries (60) exceeded I0221 08:56:50.783155 208829 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded I0221 08:56:50.783567 208829 cli_runner.go:133] Run: docker container inspect 
auto-20220221084933-6550 --format={{.State.Status}} W0221 08:56:50.818323 208829 delete.go:135] deletehost failed: Docker machine "auto-20220221084933-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. I0221 08:56:50.818397 208829 cli_runner.go:133] Run: docker container inspect -f {{.Id}} auto-20220221084933-6550 I0221 08:56:50.852482 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:50.885015 208829 cli_runner.go:133] Run: docker exec --privileged -t auto-20220221084933-6550 /bin/bash -c "sudo init 0" W0221 08:56:50.919078 208829 cli_runner.go:180] docker exec --privileged -t auto-20220221084933-6550 /bin/bash -c "sudo init 0" returned with exit code 1 I0221 08:56:50.919109 208829 oci.go:659] error shutdown auto-20220221084933-6550: docker exec --privileged -t auto-20220221084933-6550 /bin/bash -c "sudo init 0": exit status 1 stdout: stderr: Error response from daemon: Container 00857a088a82e39c05eb12c3d7fa364b17041e9ecbb348b20a1e952ed4c1fb54 is not running I0221 08:56:51.920214 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:51.954544 208829 oci.go:673] temporary error: container auto-20220221084933-6550 status is but expect it to be exited I0221 08:56:51.954575 208829 oci.go:679] Successfully shutdown container auto-20220221084933-6550 I0221 08:56:51.954633 208829 cli_runner.go:133] Run: docker rm -f -v auto-20220221084933-6550 I0221 08:56:51.995652 208829 cli_runner.go:133] Run: docker container inspect -f {{.Id}} auto-20220221084933-6550 W0221 08:56:52.030780 208829 cli_runner.go:180] docker container inspect -f {{.Id}} auto-20220221084933-6550 returned with exit code 1 I0221 08:56:52.030857 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:56:52.064402 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:56:52.064463 208829 network_create.go:254] running [docker network inspect auto-20220221084933-6550] to gather additional debugging logs... 
I0221 08:56:52.064477 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 W0221 08:56:52.098766 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 returned with exit code 1 I0221 08:56:52.098796 208829 network_create.go:257] error running [docker network inspect auto-20220221084933-6550]: docker network inspect auto-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: auto-20220221084933-6550 I0221 08:56:52.098812 208829 network_create.go:259] output of [docker network inspect auto-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: auto-20220221084933-6550 ** /stderr ** W0221 08:56:52.098950 208829 delete.go:139] delete failed (probably ok) I0221 08:56:52.098962 208829 fix.go:120] Sleeping 1 second for extra luck! I0221 08:56:53.099096 208829 start.go:126] createHost starting for "" (driver="docker") I0221 08:56:53.102600 208829 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 08:56:53.102747 208829 start.go:160] libmachine.API.Create for "auto-20220221084933-6550" (driver="docker") I0221 08:56:53.102794 208829 client.go:168] LocalClient.Create starting I0221 08:56:53.102899 208829 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 08:56:53.102945 208829 main.go:130] libmachine: Decoding PEM data... I0221 08:56:53.102970 208829 main.go:130] libmachine: Parsing certificate... I0221 08:56:53.103057 208829 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 08:56:53.103082 208829 main.go:130] libmachine: Decoding PEM data... I0221 08:56:53.103096 208829 main.go:130] libmachine: Parsing certificate... I0221 08:56:53.103314 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:56:53.136740 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:56:53.136805 208829 network_create.go:254] running [docker network inspect auto-20220221084933-6550] to gather additional debugging logs... 
I0221 08:56:53.136820 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 W0221 08:56:53.169853 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 returned with exit code 1 I0221 08:56:53.169885 208829 network_create.go:257] error running [docker network inspect auto-20220221084933-6550]: docker network inspect auto-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: auto-20220221084933-6550 I0221 08:56:53.169901 208829 network_create.go:259] output of [docker network inspect auto-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: auto-20220221084933-6550 ** /stderr ** I0221 08:56:53.169943 208829 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:56:53.204718 208829 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3aad4971443d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:75:24:60:d8}} I0221 08:56:53.205609 208829 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-8f04c0f799cd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:40:4a:89:16}} I0221 08:56:53.206406 208829 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-259ea390e559 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d1:27:54:57}} I0221 08:56:53.207351 208829 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0002b64c0 192.168.76.0:0xc0002b6468] misses:0} I0221 08:56:53.207411 208829 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 08:56:53.207422 208829 network_create.go:106] attempt to create docker network auto-20220221084933-6550 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ... 
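The three "skipping subnet ... that is taken" entries above show minikube walking candidate /24 ranges (192.168.49, .58, .67) until it finds 192.168.76.0/24 free. The occupied subnets can be listed by hand with the same inspect-template approach; the loop below is an illustrative sketch:

    # print the subnet of every bridge network Docker currently manages
    for net in $(docker network ls --filter driver=bridge -q); do
      docker network inspect "$net" \
          --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    done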
I0221 08:56:53.207482 208829 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220221084933-6550 I0221 08:56:53.282938 208829 network_create.go:90] docker network auto-20220221084933-6550 192.168.76.0/24 created I0221 08:56:53.282974 208829 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20220221084933-6550" container I0221 08:56:53.283110 208829 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 08:56:53.324582 208829 cli_runner.go:133] Run: docker volume create auto-20220221084933-6550 --label name.minikube.sigs.k8s.io=auto-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 08:56:53.368627 208829 oci.go:102] Successfully created a docker volume auto-20220221084933-6550 I0221 08:56:53.368710 208829 cli_runner.go:133] Run: docker run --rm --name auto-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220221084933-6550 --entrypoint /usr/bin/test -v auto-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 08:56:53.890381 208829 oci.go:106] Successfully prepared a docker volume auto-20220221084933-6550 I0221 08:56:53.890421 208829 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:56:53.890441 208829 kic.go:179] Starting extracting preloaded images to volume ... I0221 08:56:53.890510 208829 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 08:56:54.108380 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:56.583351 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:53.594291 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:55.594982 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:59.083417 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:01.108902 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:57.595281 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:00.095968 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:59.757141 208829 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v 
auto-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.866593931s) I0221 08:56:59.757171 208829 kic.go:188] duration metric: took 5.866728 seconds to extract preloaded images to volume W0221 08:56:59.757217 208829 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 08:56:59.757234 208829 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0221 08:56:59.757273 208829 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 08:56:59.893338 208829 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220221084933-6550 --name auto-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220221084933-6550 --network auto-20220221084933-6550 --ip 192.168.76.2 --volume auto-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 08:57:00.408893 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Running}} I0221 08:57:00.450206 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:00.509150 208829 cli_runner.go:133] Run: docker exec auto-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 08:57:00.577744 208829 oci.go:281] the created container "auto-20220221084933-6550" has a running status. I0221 08:57:00.577773 208829 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa... I0221 08:57:00.682193 208829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 08:57:00.791460 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:00.836336 208829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 08:57:00.836364 208829 kic_runner.go:114] Args: [docker exec --privileged auto-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 08:57:00.939165 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:00.987113 208829 machine.go:88] provisioning docker machine ... 
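The `docker run` above publishes 22, 2376, 5000, 8443 and 32443 to ephemeral host ports bound to loopback (`--publish=127.0.0.1::22` and so on). The provisioning entries that follow resolve the SSH port (49379 here) via an inspect template; `docker port` is an equivalent one-liner:

    # ephemeral host port mapped to the node's sshd
    docker port auto-20220221084933-6550 22
    # or the template form the log uses:
    docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        auto-20220221084933-6550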
I0221 08:57:00.987158 208829 ubuntu.go:169] provisioning hostname "auto-20220221084933-6550" I0221 08:57:00.987220 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.032031 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.032362 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.032391 208829 main.go:130] libmachine: About to run SSH command: sudo hostname auto-20220221084933-6550 && echo "auto-20220221084933-6550" | sudo tee /etc/hostname I0221 08:57:01.178093 208829 main.go:130] libmachine: SSH cmd err, output: : auto-20220221084933-6550 I0221 08:57:01.178171 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.217179 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.217336 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.217356 208829 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sauto-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 auto-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:57:01.347912 208829 main.go:130] libmachine: SSH cmd err, output: : I0221 08:57:01.347952 208829 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:57:01.348022 208829 ubuntu.go:177] setting up certificates I0221 08:57:01.348044 208829 provision.go:83] configureAuth start I0221 08:57:01.348098 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:01.383523 208829 provision.go:138] copyHostCerts I0221 08:57:01.383615 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... 
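configureAuth above refreshes the host-side CA and client certs, and the entries that follow generate a server cert with SANs covering 192.168.76.2, 127.0.0.1, localhost, minikube and the profile name, then scp it to /etc/docker in the node. Once copied, it can be inspected; a sketch, assuming openssl is available in the kicbase image:

    # show subject and validity window of the provisioned server cert
    docker exec auto-20220221084933-6550 \
        openssl x509 -in /etc/docker/server.pem -noout -subject -dates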
I0221 08:57:01.383628 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:57:01.383688 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:57:01.383771 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 08:57:01.383783 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:57:01.383804 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:57:01.384445 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 08:57:01.384509 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:57:01.384564 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:57:01.384699 208829 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.auto-20220221084933-6550 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220221084933-6550] I0221 08:57:01.504349 208829 provision.go:172] copyRemoteCerts I0221 08:57:01.504402 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:57:01.504434 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.538951 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:01.626693 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:57:01.644880 208829 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes) I0221 08:57:01.663373 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 08:57:01.681925 208829 provision.go:86] duration metric: configureAuth took 333.866692ms I0221 08:57:01.681956 208829 ubuntu.go:193] setting minikube options for container-runtime I0221 08:57:01.682119 208829 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:57:01.682172 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.716679 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.716831 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.716844 208829 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 08:57:01.839716 208829 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 08:57:01.839749 208829 ubuntu.go:71] root file system type: overlay I0221 08:57:01.839983 208829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 08:57:01.840047 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.884181 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.884320 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.884394 208829 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 08:57:02.018366 208829 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 08:57:02.018469 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:02.061379 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:02.061568 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:02.061598 208829 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 08:57:02.838157 208829 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 08:57:02.011813270 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 08:57:02.838207 208829 machine.go:91] provisioned docker machine in 1.851064371s I0221 08:57:02.838217 208829 client.go:171] LocalClient.Create took 9.735413411s I0221 08:57:02.838234 208829 start.go:168] duration metric: libmachine.API.Create for "auto-20220221084933-6550" took 9.735486959s I0221 08:57:02.838242 208829 start.go:267] post-start starting for "auto-20220221084933-6550" (driver="docker") I0221 08:57:02.838250 208829 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 08:57:02.838307 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 08:57:02.838350 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:02.874473 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:02.968610 208829 ssh_runner.go:195] Run: cat /etc/os-release I0221 08:57:02.972155 208829 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 08:57:02.972187 208829 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 08:57:02.972200 208829 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 08:57:02.972207 208829 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 08:57:02.972221 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
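Because the diff above is non-empty, the `||` branch of the SSH command fires: docker.service.new is moved into place and the unit is reloaded, enabled and restarted. The resulting unit and daemon state can be checked from the host; a sketch:

    # print the unit file systemd actually loaded, then its activation state
    docker exec auto-20220221084933-6550 systemctl cat docker
    docker exec auto-20220221084933-6550 systemctl is-active docker
    # is-active prints "active" once the restart shown in the log completes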
I0221 08:57:02.972277 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:57:02.972364 208829 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:57:02.972460 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:57:02.982082 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:57:03.032306 208829 start.go:270] post-start completed in 194.048524ms I0221 08:57:03.032660 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:03.072584 208829 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/config.json ... I0221 08:57:03.072847 208829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:57:03.072892 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:03.110734 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:03.199585 208829 start.go:129] duration metric: createHost completed in 10.100444989s I0221 08:57:03.199664 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} W0221 08:57:03.240885 208829 fix.go:134] unexpected machine state, will restart: I0221 08:57:03.240925 208829 machine.go:88] provisioning docker machine ... I0221 08:57:03.240947 208829 ubuntu.go:169] provisioning hostname "auto-20220221084933-6550" I0221 08:57:03.241037 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:03.279603 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:03.279808 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:03.279836 208829 main.go:130] libmachine: About to run SSH command: sudo hostname auto-20220221084933-6550 && echo "auto-20220221084933-6550" | sudo tee /etc/hostname I0221 08:57:03.418493 208829 main.go:130] libmachine: SSH cmd err, output: : auto-20220221084933-6550 I0221 08:57:03.418569 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:03.458210 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:03.458405 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:03.458437 208829 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sauto-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 auto-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:57:03.586755 208829 main.go:130] libmachine: SSH cmd err, output: : I0221 08:57:03.586795 208829 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:57:03.586829 208829 ubuntu.go:177] setting up certificates I0221 08:57:03.586839 208829 provision.go:83] configureAuth start I0221 08:57:03.586896 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:03.628926 208829 provision.go:138] copyHostCerts I0221 08:57:03.628997 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 08:57:03.629014 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:57:03.629092 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:57:03.629179 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... 
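The `grep -xq ... /etc/hosts` one-liner at the start of this entry keeps /etc/hosts consistent with the freshly set hostname: if no line already maps the name, it either rewrites an existing `127.0.1.1` entry in place (the sed branch) or appends a new one (the `tee -a` branch). A rough Go equivalent of that logic, assuming the same file layout (hypothetical helper, not minikube code):

-- sketch: /etc/hosts hostname guard (Go) --
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname mirrors the shell one-liner: leave /etc/hosts alone if the
// hostname is already mapped, otherwise rewrite the 127.0.1.1 line (the sed
// branch) or append a new entry (the `tee -a` branch).
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			return nil // already mapped: the `grep -xq` guard
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "auto-20220221084933-6550"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /sketch --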
I0221 08:57:03.629195 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:57:03.629228 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:57:03.629294 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 08:57:03.629308 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:57:03.629336 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:57:03.629390 208829 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.auto-20220221084933-6550 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220221084933-6550] I0221 08:57:03.991600 208829 provision.go:172] copyRemoteCerts I0221 08:57:03.991662 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:57:03.991694 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.026718 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:04.116065 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:57:04.138038 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes) I0221 08:57:04.160814 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 08:57:04.180299 208829 provision.go:86] duration metric: configureAuth took 593.439078ms I0221 08:57:04.180335 208829 ubuntu.go:193] setting minikube options for container-runtime I0221 08:57:04.180508 208829 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 
08:57:04.180555 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.218384 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:04.218602 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:04.218623 208829 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 08:57:04.349505 208829 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 08:57:04.349534 208829 ubuntu.go:71] root file system type: overlay I0221 08:57:04.349727 208829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 08:57:04.349790 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.387207 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:04.387390 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:04.387497 208829 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 08:57:04.522446 208829 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 08:57:04.522540 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.564759 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:04.564947 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:04.564981 208829 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 08:57:04.690709 208829 main.go:130] libmachine: SSH cmd err, output: : I0221 08:57:04.690733 208829 machine.go:91] provisioned docker machine in 1.449802307s I0221 08:57:04.690746 208829 start.go:267] post-start starting for "auto-20220221084933-6550" (driver="docker") I0221 08:57:04.690751 208829 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 08:57:04.690796 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 08:57:04.690832 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.737205 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:04.832817 208829 ssh_runner.go:195] Run: cat /etc/os-release I0221 08:57:04.836638 208829 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 08:57:04.836675 208829 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 08:57:04.836688 208829 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 08:57:04.836695 208829 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 08:57:04.836709 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... I0221 08:57:04.836773 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... 
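The unit file piped through `sudo tee` above is rendered on the minikube side before being shipped over SSH; the key detail is the empty `ExecStart=` line, which clears the command inherited from the stock unit before setting the TLS-enabled dockerd invocation. A sketch of rendering that ExecStart with Go's text/template (flag values are copied from the log; the template layout is an assumption, not minikube's real template):

-- sketch: rendering the dockerd ExecStart (Go) --
package main

import (
	"os"
	"text/template"
)

// The blank ExecStart= resets the inherited command; without it systemd
// rejects the unit ("more than one ExecStart= setting").
const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile={{.Ulimit}} --tlsverify --tlscacert {{.CA}} --tlscert {{.Cert}} --tlskey {{.Key}} --label provider={{.Provider}} --insecure-registry {{.Registry}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Ulimit":   "1048576:1048576",
		"CA":       "/etc/docker/ca.pem",
		"Cert":     "/etc/docker/server.pem",
		"Key":      "/etc/docker/server-key.pem",
		"Provider": "docker",
		"Registry": "10.96.0.0/12",
	})
}
-- /sketch --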
I0221 08:57:04.836854 208829 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:57:04.836951 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:57:04.848055 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:57:04.871820 208829 start.go:270] post-start completed in 181.057737ms I0221 08:57:04.871882 208829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:57:04.871922 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.909364 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:05.004082 208829 fix.go:57] fixHost completed within 3m17.180729915s I0221 08:57:05.004116 208829 start.go:80] releasing machines lock for "auto-20220221084933-6550", held for 3m17.1807932s I0221 08:57:05.004203 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:05.058656 208829 ssh_runner.go:195] Run: sudo service containerd status I0221 08:57:05.058691 208829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:57:05.058719 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:05.058747 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:05.100435 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:05.100436 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:05.208067 208829 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:57:05.344794 208829 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 08:57:05.344863 208829 ssh_runner.go:195] Run: sudo service crio status I0221 08:57:05.371501 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 08:57:05.384684 208829 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:57:05.394112 208829 ssh_runner.go:195] Run: sudo service docker status I0221 08:57:05.412284 208829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:57:05.462323 208829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:57:05.510852 
208829 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 08:57:05.510947 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:57:05.549082 208829 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts I0221 08:57:05.552663 208829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:57:03.608727 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:06.083201 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:05.565730 208829 out.go:176] - kubelet.housekeeping-interval=5m I0221 08:57:05.565811 208829 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:57:05.565865 208829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:57:05.606956 208829 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:57:05.607025 208829 docker.go:537] Images already preloaded, skipping extraction I0221 08:57:05.607086 208829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:57:05.649928 208829 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:57:05.649954 208829 cache_images.go:84] Images are preloaded, skipping loading I0221 08:57:05.649996 208829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 08:57:05.754698 208829 cni.go:93] Creating CNI manager for "" I0221 08:57:05.754720 208829 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:57:05.754727 208829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 08:57:05.754740 208829 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20220221084933-6550 NodeName:auto-20220221084933-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] 
Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 08:57:05.754849 208829 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "auto-20220221084933-6550"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 08:57:05.754928 208829 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20220221084933-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config: {KubernetesVersion:v1.23.4 ClusterName:auto-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0221 08:57:05.754968 208829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
I0221 08:57:05.763653 208829
binaries.go:44] Found k8s binaries, skipping transfer I0221 08:57:05.763829 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d I0221 08:57:05.772841 208829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes) I0221 08:57:05.788989 208829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 08:57:05.805034 208829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes) I0221 08:57:05.819979 208829 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes) I0221 08:57:05.835873 208829 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes) I0221 08:57:05.850716 208829 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts I0221 08:57:05.854214 208829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:57:05.866093 208829 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550 for IP: 192.168.76.2 I0221 08:57:05.866210 208829 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 08:57:05.866261 208829 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 08:57:05.866320 208829 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.key I0221 08:57:05.866339 208829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt with IP's: [] I0221 08:57:05.946527 208829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt ... I0221 08:57:05.946560 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: {Name:mkf66599337a85f926bbf47bc67309a30f586d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:05.946728 208829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.key ... 
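The `generating ... signed cert` entries here and in the next entries boil down to standard x509 issuance: a certificate signed by the profile CA, with the node and service IPs as subject alternative names. A self-contained sketch using crypto/x509 (a throwaway CA stands in for .minikube/certs/ca.pem; the SAN IPs are the ones logged for the apiserver cert; all field choices are illustrative):

-- sketch: CA-signed cert with IP SANs (Go) --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.{pem,key}.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs seen in the apiserver cert log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "auto-20220221084933-6550"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
-- /sketch --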
I0221 08:57:05.946743 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.key: {Name:mk4b98916e364b75e052175ceff980d7dfb7d59c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:05.946835 208829 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25 I0221 08:57:05.946858 208829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 08:57:06.102490 208829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25 ... I0221 08:57:06.102522 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25: {Name:mk69d8d8d16b926e465f137654650f785385ca18 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.102679 208829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25 ... I0221 08:57:06.102692 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25: {Name:mkd2971b67162a2c822475fe096d0b0e4ec0054c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.102789 208829 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt I0221 08:57:06.102842 208829 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key I0221 08:57:06.102884 208829 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key I0221 08:57:06.102898 208829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt with IP's: [] I0221 08:57:06.201893 208829 crypto.go:156] Writing cert to 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt ... I0221 08:57:06.201927 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt: {Name:mk80af5c6cf1913702c41b816aa4d84fc4ef770d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.202100 208829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key ... I0221 08:57:06.202114 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key: {Name:mk0ab9a45b46877a73f835519f0bc8a4becdda03 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.202272 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 08:57:06.202308 208829 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 08:57:06.202319 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 08:57:06.202344 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 08:57:06.202367 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 08:57:06.202393 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 08:57:06.202432 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 08:57:06.203351 208829 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 08:57:06.222080 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0221 08:57:06.240034 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 08:57:06.257937 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 08:57:06.275915 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 08:57:06.294110 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 08:57:06.312374 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 08:57:06.335313 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 08:57:06.357500 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 08:57:06.377803 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 08:57:06.396818 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 08:57:06.416836 208829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 08:57:06.432177 208829 ssh_runner.go:195] Run: openssl version I0221 08:57:06.438394 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 08:57:06.446874 208829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 08:57:06.450267 208829 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 08:57:06.450322 208829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 08:57:06.456518 208829 ssh_runner.go:195] Run: sudo 
/bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 08:57:06.465032 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 08:57:06.473821 208829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 08:57:06.478022 208829 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 08:57:06.478083 208829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 08:57:06.484537 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 08:57:06.493760 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 08:57:06.501591 208829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 08:57:06.505292 208829 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 08:57:06.505349 208829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 08:57:06.510784 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 08:57:06.519268 208829 kubeadm.go:391] StartCluster: {Name:auto-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:auto-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p 
MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:57:06.519384 208829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 08:57:06.558740 208829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 08:57:06.576334 208829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 08:57:06.586331 208829 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 08:57:06.586391 208829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 08:57:06.595753 208829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 08:57:06.595802 208829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 08:57:02.593875 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:05.095863 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:07.226129 208829 out.go:203] - Generating certificates and keys ... I0221 08:57:08.606947 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:11.085043 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:07.593598 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:09.595599 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:11.600301 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:09.741262 208829 out.go:203] - Booting up control plane ... 
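The `kubeadm init` invocation above pins PATH to the cached v1.23.4 binaries and suppresses a fixed list of preflight checks that are expected to fail inside a container (manifest directories already present, swap on, SystemVerification). A minimal wrapper showing the same invocation via os/exec (command and ignore list copied verbatim from the log; the wrapper itself is illustrative):

-- sketch: invoking kubeadm init with preflight ignores (Go) --
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Preflight ignores copied from the log entry above.
	ignores := "DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd," +
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml," +
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml," +
		"Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	// `env` resolves kubeadm through the prefixed PATH, as in the log.
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.23.4:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+ignores)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
-- /sketch --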
I0221 08:57:13.606594 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:16.104269 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:14.093831 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:16.094542 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:17.794661 208829 out.go:203] - Configuring RBAC rules ... I0221 08:57:18.211217 208829 cni.go:93] Creating CNI manager for "" I0221 08:57:18.211242 208829 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:57:18.211265 208829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 08:57:18.211404 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:18.211499 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=auto-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T08_57_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:18.237810 208829 ops.go:34] apiserver oom_adj: -16 I0221 08:57:18.404731 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:18.582815 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:20.585066 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:18.094583 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:20.594516 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:19.390100 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:19.889764 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:20.389415 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:20.889525 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:21.389657 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:21.889423 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:22.389317 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:22.889659 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:23.389482 208829 ssh_runner.go:195] Run: 
sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:23.889350 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:23.083375 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:25.108449 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:23.094746 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:25.094898 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:27.096067 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:24.389561 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:24.890221 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:25.389547 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:25.889416 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:26.389374 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:26.889998 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:27.389296 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:27.889606 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:28.389536 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:28.890229 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:27.607457 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:29.607786 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:29.389938 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:29.890263 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:30.389266 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:30.889421 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:31.389996 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:31.530744 208829 
kubeadm.go:1020] duration metric: took 13.31938415s to wait for elevateKubeSystemPrivileges. I0221 08:57:31.530783 208829 kubeadm.go:393] StartCluster complete in 25.011523066s I0221 08:57:31.530804 208829 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:31.530919 208829 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:57:31.532695 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:32.057336 208829 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20220221084933-6550" rescaled to 1 I0221 08:57:32.057421 208829 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:57:32.057439 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 08:57:32.060752 208829 out.go:176] * Verifying Kubernetes components... I0221 08:57:32.057752 208829 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:57:32.057772 208829 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 08:57:32.060950 208829 addons.go:65] Setting storage-provisioner=true in profile "auto-20220221084933-6550" I0221 08:57:32.060975 208829 addons.go:153] Setting addon storage-provisioner=true in "auto-20220221084933-6550" W0221 08:57:32.060982 208829 addons.go:165] addon storage-provisioner should already be in state true I0221 08:57:32.060820 208829 ssh_runner.go:195] Run: sudo service kubelet status I0221 08:57:32.061027 208829 host.go:66] Checking if "auto-20220221084933-6550" exists ... 
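The burst of `kubectl get sa default` entries above is a simple readiness gate: after `kubeadm init`, the "default" service account is polled roughly every 500ms (13.3s in total here, per the elevateKubeSystemPrivileges metric) before the cluster-admin binding is applied. A sketch of that loop (binary and kubeconfig paths copied from the log; the timeout is an assumption):

-- sketch: waiting for the default service account (Go) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.23.4/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account present")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
	}
	fmt.Println("timed out waiting for default service account")
}
-- /sketch --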
I0221 08:57:32.061101 208829 addons.go:65] Setting default-storageclass=true in profile "auto-20220221084933-6550" I0221 08:57:32.061124 208829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20220221084933-6550" I0221 08:57:32.061419 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:32.061567 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:29.594682 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:31.595072 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:32.117227 208829 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 08:57:32.117371 208829 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:57:32.117382 208829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 08:57:32.117435 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:32.121052 208829 addons.go:153] Setting addon default-storageclass=true in "auto-20220221084933-6550" W0221 08:57:32.121074 208829 addons.go:165] addon default-storageclass should already be in state true I0221 08:57:32.121097 208829 host.go:66] Checking if "auto-20220221084933-6550" exists ... I0221 08:57:32.121616 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:32.164256 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:32.164577 208829 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 08:57:32.164596 208829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 08:57:32.164645 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:32.198642 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:32.232356 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 08:57:32.235124 208829 node_ready.go:35] waiting up to 5m0s for node "auto-20220221084933-6550" to be "Ready" ... I0221 08:57:32.304584 208829 node_ready.go:49] node "auto-20220221084933-6550" has status "Ready":"True" I0221 08:57:32.304610 208829 node_ready.go:38] duration metric: took 69.457998ms waiting for node "auto-20220221084933-6550" to be "Ready" ... 
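[editor's note] The bash pipeline at the end of the block above rewrites the CoreDNS ConfigMap: sed inserts a `hosts` plugin block that maps `host.minikube.internal` to the host gateway (192.168.76.1) immediately before the `forward . /etc/resolv.conf` line, and `kubectl replace -f -` writes the result back. A rough Go equivalent of just the sed step, as a sketch (minikube itself shells out to sed exactly as shown in the log):

// Sketch: insert a CoreDNS "hosts" block in front of the
// "forward . /etc/resolv.conf" line of a Corefile. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"   hosts {\n      %s host.minikube.internal\n      fallthrough\n   }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		// Place the hosts block before the forward plugin so the static
		// record is consulted first; "fallthrough" lets every other name
		// continue on to the remaining plugins.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n   errors\n   forward . /etc/resolv.conf {\n      max_concurrent 1000\n   }\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}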
I0221 08:57:32.304620 208829 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:57:32.317676 208829 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-6wgl9" in "kube-system" namespace to be "Ready" ... I0221 08:57:32.426161 208829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 08:57:32.431088 208829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:57:33.832503 208829 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.600104705s) I0221 08:57:33.832609 208829 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS I0221 08:57:33.832546 208829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.406350623s) I0221 08:57:33.903748 208829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.472601501s) I0221 08:57:33.907151 208829 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 08:57:33.907249 208829 addons.go:417] enableAddons completed in 1.84947813s I0221 08:57:32.085234 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.109374 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:36.583295 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.093783 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:36.095122 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.337160 208829 pod_ready.go:92] pod "coredns-64897985d-6wgl9" in "kube-system" namespace has status "Ready":"True" I0221 08:57:34.337188 208829 pod_ready.go:81] duration metric: took 2.019460111s waiting for pod "coredns-64897985d-6wgl9" in "kube-system" namespace to be "Ready" ... I0221 08:57:34.337200 208829 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-rg6k7" in "kube-system" namespace to be "Ready" ... 
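[editor's note] The recurring `pod_ready.go:102 ... has status "Ready":"False"` lines on either side of this point report each pod's Ready condition. A minimal sketch, assuming client-go, of how such a check can be written (these are the standard client-go packages; the kubeconfig path and pod name are lifted from the log purely for illustration):

// Sketch: fetch a pod and read its PodReady condition. Not minikube's code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-64897985d-rg6k7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, podIsReady(pod))
}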
I0221 08:57:36.348968 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:38.848999 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:39.105966 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:41.606692 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:38.593566 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:40.593916 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:41.350153 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:43.848525 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:44.106976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:46.583983 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:42.594575 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:44.594678 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:46.594775 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:45.850432 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:48.348860 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:49.084072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:51.112230 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:49.093600 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:51.093716 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:50.349391 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:52.350161 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:53.606853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:55.607543 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:53.594138 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:55.594195 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:54.850341 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:57.349412 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:58.108377 223679 pod_ready.go:102] pod 
"calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:00.608452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:58.094464 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:00.594174 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:59.349483 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:01.851137 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:03.082697 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:05.107411 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:03.094260 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:05.097983 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:04.348276 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:06.349068 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:08.349553 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:07.583427 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.086403 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:07.594946 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.095115 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.848406 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:13.348644 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:12.582090 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:14.607319 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:12.593715 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:14.594295 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.097192 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:15.349934 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.850273 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.083915 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:19.607890 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status 
"Ready":"False" I0221 08:58:19.593497 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:21.593740 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:20.349272 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:22.851407 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:22.082238 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:24.107976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:26.608511 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:23.594026 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:26.094324 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:25.348995 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:27.356185 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:29.107566 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:31.108790 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:28.594956 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:31.094580 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:29.850008 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:32.349818 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:33.582823 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:35.586175 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:33.593910 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:35.595299 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:34.848401 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:36.848883 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:38.849110 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:37.607126 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.082258 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:38.093960 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.094102 227869 pod_ready.go:102] pod 
"coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:42.095073 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.849629 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:43.349389 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:42.108072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:44.607510 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:46.608936 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:44.593597 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:46.594499 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:45.849561 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:48.348597 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:48.609972 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:51.082477 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:48.594616 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:50.594840 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:50.348823 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:52.848990 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:53.105968 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:55.582165 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:53.094539 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:55.094604 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:54.849975 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.349154 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.606112 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:59.608167 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.593439 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:59.593598 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:01.594070 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status 
"Ready":"False" I0221 08:58:59.349596 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:01.849909 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:02.106572 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.107313 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.108123 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.094375 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.593739 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.349290 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.849034 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.849142 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.108992 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.582664 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.594057 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.594906 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.849260 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:12.849348 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:12.583673 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:14.112706 223679 pod_ready.go:81] duration metric: took 4m0.048450561s waiting for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ... E0221 08:59:14.112734 223679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 08:59:14.112746 223679 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.117793 223679 pod_ready.go:92] pod "etcd-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.117820 223679 pod_ready.go:81] duration metric: took 5.066157ms waiting for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.117832 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.122627 223679 pod_ready.go:92] pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.122647 223679 pod_ready.go:81] duration metric: took 4.807147ms waiting for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
I0221 08:59:14.122656 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.127594 223679 pod_ready.go:92] pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.127616 223679 pod_ready.go:81] duration metric: took 4.954276ms waiting for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.127627 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.480801 223679 pod_ready.go:92] pod "kube-proxy-kwcvx" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.480829 223679 pod_ready.go:81] duration metric: took 353.19554ms waiting for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.480842 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.879906 223679 pod_ready.go:92] pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.879927 223679 pod_ready.go:81] duration metric: took 399.077104ms waiting for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.879937 223679 pod_ready.go:38] duration metric: took 4m0.837387313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:59:14.879961 223679 api_server.go:51] waiting for apiserver process to appear ... 
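[editor's note] With the pod checks done, the runner next waits for the kube-apiserver process itself; the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe a little further down is what confirms it. A sketch of that wait (pattern and interval are illustrative; minikube issues the probe through its ssh_runner on the node):

// Sketch: poll pgrep until a process matching the pattern appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
}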
I0221 08:59:14.880012 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:14.942433 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:14.942510 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:15.037787 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:15.037848 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:15.134487 223679 logs.go:274] 0 containers: [] W0221 08:59:15.134520 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:15.134573 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:15.229656 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:15.229733 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:15.320906 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:15.320985 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:15.417453 223679 logs.go:274] 0 containers: [] W0221 08:59:15.417481 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:15.417528 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:15.513893 223679 logs.go:274] 2 containers: [528acfa448ce f6cf402c0c9d] I0221 08:59:15.513990 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:15.550415 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:15.550454 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:15.550465 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:15.576242 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:15.576295 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:15.618102 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:15.618136 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:15.656954 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:15.656987 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:15.722111 223679 logs.go:123] Gathering logs for storage-provisioner [f6cf402c0c9d] ... I0221 08:59:15.722147 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6cf402c0c9d" I0221 08:59:15.808702 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:15.808737 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:15.889269 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:15.889312 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:15.945538 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:15.945571 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:16.147141 223679 logs.go:123] Gathering logs for describe nodes ... 
I0221 08:59:16.147186 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:16.338070 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:16.338111 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:16.431605 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:16.431645 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:16.530228 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:16.530264 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:12.595167 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:15.094611 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:15.348719 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:17.348992 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:19.103148 223679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 08:59:19.129062 223679 api_server.go:71] duration metric: took 4m5.106529752s to wait for apiserver process to appear ... I0221 08:59:19.129100 223679 api_server.go:87] waiting for apiserver healthz status ... I0221 08:59:19.129165 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:19.224393 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:19.224460 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:19.319828 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:19.319900 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:19.418463 223679 logs.go:274] 0 containers: [] W0221 08:59:19.418495 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:19.418541 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:19.516431 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:19.516522 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:19.607457 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:19.607543 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:19.644308 223679 logs.go:274] 0 containers: [] W0221 08:59:19.644330 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:19.644368 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:19.677987 223679 logs.go:274] 1 containers: [528acfa448ce] I0221 08:59:19.678065 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:19.711573 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:19.711614 223679 logs.go:123] Gathering logs for dmesg ... 
I0221 08:59:19.711634 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:19.739316 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:19.739352 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:19.829642 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:19.829686 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:19.928327 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:19.928367 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:20.030039 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:20.030084 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:20.115493 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:20.115539 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:20.289828 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:20.289874 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:20.351337 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:20.351388 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:20.480018 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:20.480056 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:20.594320 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:20.594358 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:20.641023 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:20.641062 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:17.594243 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:20.094535 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:22.095445 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:19.849214 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:22.349291 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:23.238237 223679 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... I0221 08:59:23.244347 223679 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok I0221 08:59:23.246494 223679 api_server.go:140] control plane version: v1.23.4 I0221 08:59:23.246519 223679 api_server.go:130] duration metric: took 4.1174116s to wait for apiserver health ... I0221 08:59:23.246529 223679 system_pods.go:43] waiting for kube-system pods to appear ... 
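[editor's note] The healthz wait above resolves with a plain HTTPS GET: `https://192.168.67.2:8443/healthz` answering `200: ok` is taken as healthy, after which the control-plane version is read. A minimal sketch of such a probe (TLS verification is skipped here only for brevity; a real client should trust the cluster CA instead):

// Sketch: GET /healthz on the apiserver and report the status and body.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}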
I0221 08:59:23.246581 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:23.331088 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:23.331164 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:23.425220 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:23.425297 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:23.510198 223679 logs.go:274] 0 containers: [] W0221 08:59:23.510230 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:23.510284 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:23.548794 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:23.548859 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:23.642803 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:23.642891 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:23.735232 223679 logs.go:274] 0 containers: [] W0221 08:59:23.735263 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:23.735316 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:23.820175 223679 logs.go:274] 1 containers: [528acfa448ce] I0221 08:59:23.820245 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:23.911162 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:23.911205 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:23.911218 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:24.010277 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:24.010307 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:24.188331 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:24.188378 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:24.235517 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:24.235564 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:24.433778 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:24.433815 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:24.542462 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:24.542562 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:24.683898 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:24.683938 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:24.747804 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:24.747846 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:24.839623 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... 
I0221 08:59:24.839664 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:24.933214 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:24.933249 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:24.970081 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:24.970115 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:24.593641 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:25.099642 227869 pod_ready.go:81] duration metric: took 4m0.023714023s waiting for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ... E0221 08:59:25.099664 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 08:59:25.099673 227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.101152 227869 pod_ready.go:97] error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found I0221 08:59:25.101173 227869 pod_ready.go:81] duration metric: took 1.494584ms waiting for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ... E0221 08:59:25.101182 227869 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found I0221 08:59:25.101190 227869 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.105178 227869 pod_ready.go:92] pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.105196 227869 pod_ready.go:81] duration metric: took 3.99997ms waiting for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.105204 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.109930 227869 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.109949 227869 pod_ready.go:81] duration metric: took 4.739462ms waiting for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.109958 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.292675 227869 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.292711 227869 pod_ready.go:81] duration metric: took 182.734028ms waiting for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.292723 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.691815 227869 pod_ready.go:92] pod "kube-proxy-q4stn" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.691839 227869 pod_ready.go:81] duration metric: took 399.108423ms waiting for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ... 
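[editor's note] Note the `(skipping!)` branch above: `coredns-64897985d-kn627` no longer exists (most likely removed when its coredns deployment was rescaled to one replica, as seen earlier for the auto profile), so the Get returns NotFound and the waiter treats the pod as nothing-to-wait-for rather than as a failure. A sketch of that branch, assuming client-go and its apierrors helpers:

// Sketch: treat a NotFound pod as "skip", anything else as a real error.
package podwait

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitOrSkip(client *kubernetes.Clientset, ns, name string) error {
	_, err := client.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Printf("pod %q not found, skipping\n", name)
		return nil // pod is gone; there is no condition left to wait for
	}
	if err != nil {
		return err
	}
	// ...otherwise poll the pod's Ready condition as in the earlier sketch.
	return nil
}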
I0221 08:59:25.691848 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:26.092539 227869 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:26.092566 227869 pod_ready.go:81] duration metric: took 400.710732ms waiting for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:26.092579 227869 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ... I0221 08:59:24.850016 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:27.349859 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:27.559651 223679 system_pods.go:59] 9 kube-system pods found I0221 08:59:27.559689 223679 system_pods.go:61] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.559697 223679 system_pods.go:61] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.559703 223679 system_pods.go:61] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.559708 223679 system_pods.go:61] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.559713 223679 system_pods.go:61] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.559717 223679 system_pods.go:61] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.559722 223679 system_pods.go:61] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.559726 223679 system_pods.go:61] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.559734 223679 system_pods.go:61] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.559742 223679 system_pods.go:74] duration metric: took 4.313209437s to wait for pod list to return data ... I0221 08:59:27.559749 223679 default_sa.go:34] waiting for default service account to be created ... I0221 08:59:27.562671 223679 default_sa.go:45] found service account: "default" I0221 08:59:27.562697 223679 default_sa.go:55] duration metric: took 2.939018ms for default service account to be created ... I0221 08:59:27.562709 223679 system_pods.go:116] waiting for k8s-apps to be running ... 
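[editor's note] The `retry.go:31` delays that follow (263ms, 381ms, 422ms, 473ms, 587ms, 834ms, ...) grow with some randomness, consistent with exponential backoff plus jitter. A toy sketch of that pattern (growth factor and jitter range are guesses, not minikube's actual parameters):

// Sketch: exponential backoff with random jitter between attempts.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		// Add up to 50% random jitter so concurrent retriers do not sync up.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: will retry after %s\n", attempt, jittered)
		time.Sleep(jittered)
		delay = delay * 13 / 10 // grow ~1.3x per attempt
	}
}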
I0221 08:59:27.606750 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:27.606791 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.606820 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.606832 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.606849 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.606856 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.606863 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.606870 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.606880 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.606889 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.606913 223679 retry.go:31] will retry after 263.082536ms: missing components: kube-dns I0221 08:59:27.875522 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:27.875558 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.875569 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.875575 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.875581 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.875586 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.875590 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.875593 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.875598 223679 system_pods.go:89] 
"kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.875603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.875619 223679 retry.go:31] will retry after 381.329545ms: missing components: kube-dns I0221 08:59:28.262703 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:28.262737 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:28.262745 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:28.262752 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:28.262757 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:28.262764 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:28.262770 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:28.262776 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:28.262782 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:28.262789 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:28.262812 223679 retry.go:31] will retry after 422.765636ms: missing components: kube-dns I0221 08:59:28.708387 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:28.708425 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:28.708467 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:28.708488 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:28.708506 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" 
[64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:28.708519 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:28.708531 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:28.708537 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:28.708544 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:28.708559 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:28.708575 223679 retry.go:31] will retry after 473.074753ms: missing components: kube-dns I0221 08:59:29.187326 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:29.187359 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:29.187367 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:29.187374 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:29.187379 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:29.187384 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:29.187388 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:29.187392 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:29.187396 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:29.187401 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:29.187414 223679 retry.go:31] will retry after 587.352751ms: missing components: kube-dns I0221 08:59:29.807999 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:29.808041 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:29.808052 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready 
status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:29.808062 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:29.808069 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:29.808077 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:29.808087 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:29.808093 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:29.808103 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:29.808113 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:29.808133 223679 retry.go:31] will retry after 834.206799ms: missing components: kube-dns
I0221 08:59:30.649684 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:30.649731 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:30.649746 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:30.649756 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:30.649766 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:30.649778 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:30.649792 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:30.649806 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:30.649817 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:30.649831 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:30.649852 223679 retry.go:31] will retry after 746.553905ms: missing components: kube-dns
I0221 08:59:31.403363 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:31.403414 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:31.403426 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:31.403438 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:31.403446 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:31.403455 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:31.403466 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:31.403474 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:31.403488 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:31.403498 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:31.403522 223679 retry.go:31] will retry after 987.362415ms: missing components: kube-dns
I0221 08:59:28.498990 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:30.998871 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:29.848666 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:31.849001 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:32.397015 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:32.397055 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:32.397064 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:32.397075 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:32.397083 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:32.397090 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:32.397103 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:32.397110 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:32.397121 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:32.397132 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:32.397148 223679 retry.go:31] will retry after 1.189835008s: missing components: kube-dns
I0221 08:59:33.607429 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:33.607467 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:33.607475 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:33.607484 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:33.607493 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:33.607500 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:33.607507 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:33.607531 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:33.607541 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:33.607550 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:33.607570 223679 retry.go:31] will retry after 1.677229867s: missing components: kube-dns
I0221 08:59:35.291721 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:35.291757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:35.291767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:35.291776 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:35.291783 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:35.291792 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:35.291798 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:35.291809 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:35.291815 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:35.291826 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:35.291840 223679 retry.go:31] will retry after 2.346016261s: missing components: kube-dns
I0221 08:59:33.499218 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:35.998834 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:34.349423 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:36.849024 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:37.644075 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:37.644109 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:37.644117 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:37.644124 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:37.644131 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:37.644136 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:37.644140 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:37.644144 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:37.644147 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:37.644153 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:37.644169 223679 retry.go:31] will retry after 3.36678925s: missing components: kube-dns
I0221 08:59:41.020218 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:41.020262 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:41.020274 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:41.020284 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:41.020290 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:41.020296 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:41.020301 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:41.020307 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:41.020324 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:41.020332 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:41.020346 223679 retry.go:31] will retry after 3.11822781s: missing components: kube-dns
I0221 08:59:38.498252 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:40.499308 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:39.349078 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:41.848438 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:44.146493 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:44.146526 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:44.146534 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:44.146544 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:44.146552 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:44.146563 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:44.146570 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:44.146582 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:44.146593 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:44.146603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:44.146623 223679 retry.go:31] will retry after 4.276119362s: missing components: kube-dns
I0221 08:59:42.998921 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:45.498291 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:44.348710 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:46.849283 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:48.850157 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:48.430784 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:48.430822 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:48.430855 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:48.430867 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:48.430880 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:48.430889 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:48.430901 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:48.430911 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:48.430921 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:48.430931 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:48.431005 223679 retry.go:31] will retry after 5.167232101s: missing components: kube-dns
I0221 08:59:47.498914 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:49.998220 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:51.999087 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:51.349913 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:53.848663 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:53.607863 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:53.607910 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:53.607925 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:53.607936 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:53.607950 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:53.607957 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:53.607965 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:53.607971 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:53.607979 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:53.607991 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:53.608009 223679 retry.go:31] will retry after 6.994901864s: missing components: kube-dns
I0221 08:59:53.999129 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:56.497881 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:55.849681 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:58.348890 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:00.608725 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:00.608757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:00.608767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:00.608774 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:00.608778 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:00.608783 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:00.608788 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:00.608791 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:00.608796 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:00.608801 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:00.608818 223679 retry.go:31] will retry after 7.91826225s: missing components: kube-dns
I0221 08:59:58.498148 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:00.999242 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:00.349704 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:02.851497 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:03.498525 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:05.999154 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:05.348387 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:07.348675 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:08.534545 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:08.534589 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:08.534602 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:08.534613 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:08.534621 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:08.534630 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:08.534642 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:08.534654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:08.534665 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:08.534678 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:08.534700 223679 retry.go:31] will retry after 9.953714808s: missing components: kube-dns
I0221 09:00:08.498881 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:10.998464 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:09.349729 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:11.848467 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:13.848882 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:12.998682 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:14.999363 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:16.350910 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:18.848692 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:18.494832 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:18.494873 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:18.494884 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:18.494893 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:18.494898 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:18.494903 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:18.494909 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:18.494918 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:18.494925 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:18.494935 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:18.494956 223679 retry.go:31] will retry after 15.120437328s: missing components: kube-dns
I0221 09:00:17.498767 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:19.499481 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:21.998971 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:20.849344 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:23.349381 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:24.499960 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:26.999269 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:25.849056 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:27.849318 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:29.499198 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:31.998892 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:30.349828 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:32.848757 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:33.622907 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:33.622950 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:33.622961 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:33.622970 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:33.622977 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:33.622983 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:33.622989 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:33.623036 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:33.623050 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:33.623058 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:33.623079 223679 retry.go:31] will retry after 14.90607158s: missing components: kube-dns
I0221 09:00:33.999959 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:36.498439 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:34.848956 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:36.849066 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:38.849119 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:38.998551 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:40.998664 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:41.348585 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:43.349457 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:42.999010 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:45.498414 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:45.850967 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:48.349610 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:48.536869 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:48.536919 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:48.536931 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:48.536941 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:48.536949 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:48.536955 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:48.536959 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:48.536964 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:48.536968 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:48.536982 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running
I0221 09:00:48.536998 223679 retry.go:31] will retry after 18.465989061s: missing components: kube-dns
I0221 09:00:47.498620 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:49.998601 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:51.999470 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:50.849439 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:53.348792 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:54.499043 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:56.499562 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:55.348932 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:57.847995 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:58.998197 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:00.998372 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:59.848674 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:01.849363 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:02.999674 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:05.499244 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:04.348871 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:06.349795 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:08.849206 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:07.010825 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:01:07.010865 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:01:07.010877 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:01:07.010887 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:07.010895 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:01:07.010902 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:01:07.010908 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:01:07.010925 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:01:07.010931 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:01:07.010939 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running
I0221 09:01:07.010960 223679 retry.go:31] will retry after 25.219510332s: missing components: kube-dns
I0221 09:01:07.998930 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:10.499101 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:11.349117 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:13.848278 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:12.499436 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:14.998244 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:16.998957 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:15.849578 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:18.348555 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:19.499569 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:21.503811 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:20.349090 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:22.848149 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:23.998532 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:26.001410 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:25.348797 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:27.349734 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:28.497652 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:30.497882 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:29.848914 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:31.849118 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:33.850062 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:32.236004 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:01:32.236044 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:01:32.236056 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:01:32.236064 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:32.236072 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:01:32.236078 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:01:32.236084 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:01:32.236091 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:01:32.236097 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:01:32.236107 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:01:32.236125 223679 retry.go:31] will retry after 35.078569648s: missing components: kube-dns
I0221 09:01:32.498505 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:34.499389 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:36.998781 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:34.352622 208829 pod_ready.go:81] duration metric: took 4m0.01541005s waiting for pod "coredns-64897985d-rg6k7" in "kube-system" namespace to be "Ready" ...
E0221 09:01:34.352645 208829 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0221 09:01:34.352653 208829 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.356337 208829 pod_ready.go:92] pod "etcd-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.356357 208829 pod_ready.go:81] duration metric: took 3.698768ms waiting for pod "etcd-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.356365 208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.360068 208829 pod_ready.go:92] pod "kube-apiserver-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.360086 208829 pod_ready.go:81] duration metric: took 3.71506ms waiting for pod "kube-apiserver-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.360094 208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.363833 208829 pod_ready.go:92] pod "kube-controller-manager-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.363854 208829 pod_ready.go:81] duration metric: took 3.753995ms waiting for pod "kube-controller-manager-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.363864 208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-j6t4r" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.747517 208829 pod_ready.go:92] pod "kube-proxy-j6t4r" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.747544 208829 pod_ready.go:81] duration metric: took 383.671848ms waiting for pod "kube-proxy-j6t4r" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.747559 208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:35.147507 208829 pod_ready.go:92] pod "kube-scheduler-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:35.147532 208829 pod_ready.go:81] duration metric: took 399.96592ms waiting for pod "kube-scheduler-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:35.147543 208829 pod_ready.go:38] duration metric: took 4m2.842909165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:01:35.147607 208829 api_server.go:51] waiting for apiserver process to appear ...
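Note: the pod_ready.go lines are a per-pod readiness wait: each target pod is polled until its PodReady condition turns True (or a 5m0s budget runs out, as happened for coredns-64897985d-rg6k7 above), and a duration metric is logged. A client-go sketch of the same check; the kubeconfig path, pod name, and intervals are illustrative, not the test helper's actual values:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; the test points this at the profile's cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	name := "coredns-64897985d-rg6k7" // the pod being waited on in the log
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, getErr := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		if !isPodReady(pod) {
			fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", name)
			return false, nil
		}
		return true, nil
	})
	fmt.Printf("duration metric: took %s waiting for pod %q, err=%v\n", time.Since(start), name, err)
}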
I0221 09:01:35.147666 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:01:35.184048 208829 logs.go:274] 1 containers: [ee44803ab83a]
I0221 09:01:35.184116 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:01:35.222133 208829 logs.go:274] 1 containers: [b23ee2bbc19d]
I0221 09:01:35.222212 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:01:35.257673 208829 logs.go:274] 1 containers: [9ec110d5717f]
I0221 09:01:35.257742 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:01:35.291205 208829 logs.go:274] 1 containers: [c78588822ac6]
I0221 09:01:35.291278 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:01:35.324945 208829 logs.go:274] 1 containers: [76924ebff838]
I0221 09:01:35.325015 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:01:35.359782 208829 logs.go:274] 0 containers: []
W0221 09:01:35.359804 208829 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:01:35.359842 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:01:35.393096 208829 logs.go:274] 1 containers: [1cd0b722c1ad]
I0221 09:01:35.393159 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:01:35.428486 208829 logs.go:274] 1 containers: [0bb1b94ca5a9]
I0221 09:01:35.428556 208829 logs.go:123] Gathering logs for coredns [9ec110d5717f] ...
I0221 09:01:35.428576 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec110d5717f"
I0221 09:01:35.465039 208829 logs.go:123] Gathering logs for kube-scheduler [c78588822ac6] ...
I0221 09:01:35.465067 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78588822ac6"
I0221 09:01:35.508531 208829 logs.go:123] Gathering logs for kube-proxy [76924ebff838] ...
I0221 09:01:35.508563 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76924ebff838"
I0221 09:01:35.544059 208829 logs.go:123] Gathering logs for kube-apiserver [ee44803ab83a] ...
I0221 09:01:35.544087 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee44803ab83a"
I0221 09:01:35.589418 208829 logs.go:123] Gathering logs for etcd [b23ee2bbc19d] ...
I0221 09:01:35.589467 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b23ee2bbc19d"
I0221 09:01:35.638600 208829 logs.go:123] Gathering logs for describe nodes ...
I0221 09:01:35.638638 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:01:35.726202 208829 logs.go:123] Gathering logs for storage-provisioner [1cd0b722c1ad] ...
I0221 09:01:35.726239 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd0b722c1ad"
I0221 09:01:35.768986 208829 logs.go:123] Gathering logs for kube-controller-manager [0bb1b94ca5a9] ...
I0221 09:01:35.769016 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb1b94ca5a9"
I0221 09:01:35.823631 208829 logs.go:123] Gathering logs for Docker ...
I0221 09:01:35.823668 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:01:35.842264 208829 logs.go:123] Gathering logs for container status ...
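Note: the log-gathering pass above is a two-step flow per component: resolve the container ID with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail the last 400 lines of its logs. A stand-alone sketch of that flow with os/exec; minikube actually runs these commands through its ssh_runner inside the node, and the component list here is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name matches k8s_<component>,
// mirroring the "docker ps -a --filter=name=k8s_..." calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		// Tail the last 400 lines, as the test does.
		logs, _ := exec.Command("docker", "logs", "--tail", "400", ids[0]).CombinedOutput()
		fmt.Printf("=== %s [%s] ===\n%s\n", c, ids[0], logs)
	}
}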
I0221 09:01:35.842298 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:01:35.879259 208829 logs.go:123] Gathering logs for kubelet ...
I0221 09:01:35.879298 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:01:35.948001 208829 logs.go:123] Gathering logs for dmesg ...
I0221 09:01:35.948047 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:01:38.481944 208829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:01:38.504799 208829 api_server.go:71] duration metric: took 4m6.447340023s to wait for apiserver process to appear ...
I0221 09:01:38.504830 208829 api_server.go:87] waiting for apiserver healthz status ...
I0221 09:01:38.504879 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:01:38.537954 208829 logs.go:274] 1 containers: [ee44803ab83a]
I0221 09:01:38.538037 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:01:38.571333 208829 logs.go:274] 1 containers: [b23ee2bbc19d]
I0221 09:01:38.571405 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:01:38.604685 208829 logs.go:274] 1 containers: [9ec110d5717f]
I0221 09:01:38.604755 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:01:38.638264 208829 logs.go:274] 1 containers: [c78588822ac6]
I0221 09:01:38.638348 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:01:38.673235 208829 logs.go:274] 1 containers: [76924ebff838]
I0221 09:01:38.673305 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:01:38.706125 208829 logs.go:274] 0 containers: []
W0221 09:01:38.706156 208829 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:01:38.706205 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:01:38.739965 208829 logs.go:274] 1 containers: [1cd0b722c1ad]
I0221 09:01:38.740043 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:01:38.773046 208829 logs.go:274] 1 containers: [0bb1b94ca5a9]
I0221 09:01:38.773090 208829 logs.go:123] Gathering logs for dmesg ...
I0221 09:01:38.773105 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:01:38.807138 208829 logs.go:123] Gathering logs for coredns [9ec110d5717f] ...
I0221 09:01:38.807175 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec110d5717f"
I0221 09:01:38.850852 208829 logs.go:123] Gathering logs for kubelet ...
I0221 09:01:38.850885 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:01:38.915403 208829 logs.go:123] Gathering logs for describe nodes ...
I0221 09:01:38.915466 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:01:39.005837 208829 logs.go:123] Gathering logs for kube-apiserver [ee44803ab83a] ...
I0221 09:01:39.005870 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee44803ab83a"
I0221 09:01:39.059582 208829 logs.go:123] Gathering logs for etcd [b23ee2bbc19d] ...
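Note: before probing healthz, the test first waits for a kube-apiserver process to exist at all, retrying sudo pgrep -xnf kube-apiserver.*minikube.* until it prints a PID; pgrep exits non-zero while nothing matches, which is what the loop keys off. A local sketch of that wait, with an illustrative timeout and poll interval:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for {
		// -x: exact match, -n: newest match, -f: match against the full command line.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver process appeared: pid %s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for apiserver process to appear")
			return
		}
		time.Sleep(2 * time.Second)
	}
}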
I0221 09:01:39.059627 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b23ee2bbc19d"
I0221 09:01:39.106453 208829 logs.go:123] Gathering logs for kube-scheduler [c78588822ac6] ...
I0221 09:01:39.106489 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78588822ac6"
I0221 09:01:39.159258 208829 logs.go:123] Gathering logs for kube-proxy [76924ebff838] ...
I0221 09:01:39.159304 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76924ebff838"
I0221 09:01:39.195407 208829 logs.go:123] Gathering logs for storage-provisioner [1cd0b722c1ad] ...
I0221 09:01:39.195434 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd0b722c1ad"
I0221 09:01:39.497987 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:41.999075 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:39.237051 208829 logs.go:123] Gathering logs for kube-controller-manager [0bb1b94ca5a9] ...
I0221 09:01:39.237078 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb1b94ca5a9"
I0221 09:01:39.289740 208829 logs.go:123] Gathering logs for Docker ...
I0221 09:01:39.289779 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:01:39.309121 208829 logs.go:123] Gathering logs for container status ...
I0221 09:01:39.309166 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:01:41.850261 208829 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0221 09:01:41.855142 208829 api_server.go:266] https://192.168.76.2:8443/healthz returned 200: ok
I0221 09:01:41.856107 208829 api_server.go:140] control plane version: v1.23.4
I0221 09:01:41.856131 208829 api_server.go:130] duration metric: took 3.351295129s to wait for apiserver health ...
I0221 09:01:41.856140 208829 system_pods.go:43] waiting for kube-system pods to appear ...
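Note: the healthz wait polls the apiserver's /healthz endpoint over HTTPS until it answers 200 ok; 192.168.76.2:8443 is the address from the log. A sketch of one such probe; a real client would either skip certificate verification, as here, or load the cluster's CA bundle, since the apiserver's serving cert is not signed by a system CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification for this probe; loading the cluster CA is the
		// stricter alternative.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// A 200 here corresponds to the "returned 200: ok" line in the log.
	fmt.Printf("https://192.168.76.2:8443/healthz returned %d\n", resp.StatusCode)
}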
I0221 09:01:41.856194 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:01:41.890006 208829 logs.go:274] 1 containers: [ee44803ab83a]
I0221 09:01:41.890088 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:01:41.923013 208829 logs.go:274] 1 containers: [b23ee2bbc19d]
I0221 09:01:41.923093 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:01:41.958916 208829 logs.go:274] 1 containers: [9ec110d5717f]
I0221 09:01:41.958990 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:01:41.994619 208829 logs.go:274] 1 containers: [c78588822ac6]
I0221 09:01:41.994705 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:01:42.037650 208829 logs.go:274] 1 containers: [76924ebff838]
I0221 09:01:42.037726 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:01:42.075743 208829 logs.go:274] 0 containers: []
W0221 09:01:42.075768 208829 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:01:42.075820 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:01:42.118071 208829 logs.go:274] 1 containers: [1cd0b722c1ad]
I0221 09:01:42.118163 208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:01:42.159633 208829 logs.go:274] 1 containers: [0bb1b94ca5a9]
I0221 09:01:42.159684 208829 logs.go:123] Gathering logs for describe nodes ...
I0221 09:01:42.159700 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:01:42.252178 208829 logs.go:123] Gathering logs for kube-apiserver [ee44803ab83a] ...
I0221 09:01:42.252212 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee44803ab83a"
I0221 09:01:42.298061 208829 logs.go:123] Gathering logs for etcd [b23ee2bbc19d] ...
I0221 09:01:42.298092 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b23ee2bbc19d"
I0221 09:01:42.348980 208829 logs.go:123] Gathering logs for kube-scheduler [c78588822ac6] ...
I0221 09:01:42.349015 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78588822ac6"
I0221 09:01:42.394629 208829 logs.go:123] Gathering logs for kube-proxy [76924ebff838] ...
I0221 09:01:42.394665 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76924ebff838"
I0221 09:01:42.435725 208829 logs.go:123] Gathering logs for storage-provisioner [1cd0b722c1ad] ...
I0221 09:01:42.435765 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd0b722c1ad"
I0221 09:01:42.475586 208829 logs.go:123] Gathering logs for kubelet ...
I0221 09:01:42.475618 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:01:42.539602 208829 logs.go:123] Gathering logs for coredns [9ec110d5717f] ...
I0221 09:01:42.539644 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec110d5717f"
I0221 09:01:42.586017 208829 logs.go:123] Gathering logs for kube-controller-manager [0bb1b94ca5a9] ...
I0221 09:01:42.586047 208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb1b94ca5a9"
I0221 09:01:42.639424 208829 logs.go:123] Gathering logs for Docker ...
I0221 09:01:42.639458 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:01:42.658416 208829 logs.go:123] Gathering logs for container status ...
I0221 09:01:42.658457 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:01:42.691083 208829 logs.go:123] Gathering logs for dmesg ...
I0221 09:01:42.691113 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:01:45.232111 208829 system_pods.go:59] 7 kube-system pods found
I0221 09:01:45.232150 208829 system_pods.go:61] "coredns-64897985d-rg6k7" [b5b504ee-2e2d-4f88-84b8-ce018dbb6549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:45.232157 208829 system_pods.go:61] "etcd-auto-20220221084933-6550" [c25df683-b01e-4f2a-8e47-7a1409996649] Running
I0221 09:01:45.232162 208829 system_pods.go:61] "kube-apiserver-auto-20220221084933-6550" [ae612da3-338a-40de-98fd-f627bf47483f] Running
I0221 09:01:45.232166 208829 system_pods.go:61] "kube-controller-manager-auto-20220221084933-6550" [cf06723d-2296-4a7f-a9fc-f5c629f0c7aa] Running
I0221 09:01:45.232171 208829 system_pods.go:61] "kube-proxy-j6t4r" [eb672423-9289-4e70-93e6-75fa71e1c263] Running
I0221 09:01:45.232175 208829 system_pods.go:61] "kube-scheduler-auto-20220221084933-6550" [7a81e5fe-13d9-4994-9a6d-e0da219b2414] Running
I0221 09:01:45.232183 208829 system_pods.go:61] "storage-provisioner" [cb2b449c-788d-4efb-9f51-1de24e609c8b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:01:45.232191 208829 system_pods.go:74] duration metric: took 3.376043984s to wait for pod list to return data ...
I0221 09:01:45.232206 208829 default_sa.go:34] waiting for default service account to be created ...
I0221 09:01:45.234739 208829 default_sa.go:45] found service account: "default"
I0221 09:01:45.234761 208829 default_sa.go:55] duration metric: took 2.545741ms for default service account to be created ...
I0221 09:01:45.234768 208829 system_pods.go:116] waiting for k8s-apps to be running ...
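The system_pods and pod_ready checks in this stretch reduce to listing kube-system pods through the API and inspecting each pod's phase and Ready condition. A condensed client-go sketch of that check (the kubeconfig path is a placeholder and error handling is abbreviated; this is an illustration of the idea, not minikube's actual code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True; this is the
// test behind the repeated `has status "Ready":"False"` lines.
func isPodReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, isPodReady(p))
	}
}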
I0221 09:01:45.238869 208829 system_pods.go:86] 7 kube-system pods found
I0221 09:01:45.238897 208829 system_pods.go:89] "coredns-64897985d-rg6k7" [b5b504ee-2e2d-4f88-84b8-ce018dbb6549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:45.238903 208829 system_pods.go:89] "etcd-auto-20220221084933-6550" [c25df683-b01e-4f2a-8e47-7a1409996649] Running
I0221 09:01:45.238908 208829 system_pods.go:89] "kube-apiserver-auto-20220221084933-6550" [ae612da3-338a-40de-98fd-f627bf47483f] Running
I0221 09:01:45.238912 208829 system_pods.go:89] "kube-controller-manager-auto-20220221084933-6550" [cf06723d-2296-4a7f-a9fc-f5c629f0c7aa] Running
I0221 09:01:45.238916 208829 system_pods.go:89] "kube-proxy-j6t4r" [eb672423-9289-4e70-93e6-75fa71e1c263] Running
I0221 09:01:45.238920 208829 system_pods.go:89] "kube-scheduler-auto-20220221084933-6550" [7a81e5fe-13d9-4994-9a6d-e0da219b2414] Running
I0221 09:01:45.238950 208829 system_pods.go:89] "storage-provisioner" [cb2b449c-788d-4efb-9f51-1de24e609c8b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:01:45.238960 208829 system_pods.go:126] duration metric: took 4.188514ms to wait for k8s-apps to be running ...
I0221 09:01:45.238966 208829 system_svc.go:44] waiting for kubelet service to be running ....
I0221 09:01:45.239044 208829 ssh_runner.go:195] Run: sudo service kubelet status
I0221 09:01:45.258078 208829 system_svc.go:56] duration metric: took 19.104209ms WaitForService to wait for kubelet.
I0221 09:01:45.258115 208829 kubeadm.go:548] duration metric: took 4m13.200661633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0221 09:01:45.258142 208829 node_conditions.go:102] verifying NodePressure condition ...
I0221 09:01:45.263650 208829 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0221 09:01:45.263679 208829 node_conditions.go:123] node cpu capacity is 8
I0221 09:01:45.263695 208829 node_conditions.go:105] duration metric: took 5.547069ms to run NodePressure ...
I0221 09:01:45.263705 208829 start.go:213] waiting for startup goroutines ...
I0221 09:01:45.306637 208829 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0)
I0221 09:01:45.310494 208829 out.go:176] * Done! kubectl is now configured to use "auto-20220221084933-6550" cluster and "default" namespace by default
I0221 09:01:43.999131 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:45.999453 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:48.498612 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:50.502349 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:53.000328 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:55.498350 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:57.498897 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:59.998589 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:02.498112 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:04.499166 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:06.499366 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:07.320903 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:02:07.320944 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:02:07.320955 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:02:07.320961 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:02:07.320967 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:02:07.320973 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:02:07.320977 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:02:07.320981 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:02:07.320985 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:02:07.320990 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:02:07.321002 223679 retry.go:31] will retry after 50.027701973s: missing components: kube-dns
I0221 09:02:08.998138 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:10.998798 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:12.998867 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:14.999708 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:17.499134 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:19.998038 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:21.999415 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:24.503262 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:26.998872 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:28.999023 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:31.498312 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:33.498493 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:35.999270 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
*
* ==> Docker <==
*
-- Logs begin at Mon 2022-02-21 08:55:41 UTC, end at Mon 2022-02-21 09:02:41 UTC. --
Feb 21 08:55:43 false-20220221084934-6550 systemd[1]: Stopped Docker Application Container Engine.
Feb 21 08:55:43 false-20220221084934-6550 systemd[1]: Starting Docker Application Container Engine...
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.641972110Z" level=info msg="Starting up"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644349551Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644393261Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644433556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644451679Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645804236Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645939514Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645973818Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645984191Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.652317916Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658136600Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658167911Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658175093Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658365740Z" level=info msg="Loading containers: start."
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.770668825Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.814491493Z" level=info msg="Loading containers: done."
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.832796730Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.832893339Z" level=info msg="Daemon has completed initialization"
Feb 21 08:55:43 false-20220221084934-6550 systemd[1]: Started Docker Application Container Engine.
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.854247043Z" level=info msg="API listen on [::]:2376"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.857933661Z" level=info msg="API listen on /var/run/docker.sock"
Feb 21 08:56:17 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:56:17.743295566Z" level=info msg="ignoring event" container=f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 08:56:17 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:56:17.794346836Z" level=info msg="ignoring event" container=e6c7cf2ddcf6c41555cce331d7cd9cd5d0c46cf25daa9b590b194449b67d31c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER      IMAGE                                                                                                    CREATED        STATE    NAME                      ATTEMPT  POD ID
ea86a1d35b73f  k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1  6 minutes ago  Running  dnsutils                  0        9d884bdb5ec49
5ea2efd751380  6e38f40d628db                                                                                            6 minutes ago  Running  storage-provisioner       0        3329154f839c3
d912cc0c981d4  a4ca41631cc7a                                                                                            6 minutes ago  Running  coredns                   0        654d30a3d4079
8a0c30ea7fd7c  2114245ec4d6b                                                                                            6 minutes ago  Running  kube-proxy                0        1ebf20a1a27fc
d7932880a27cd  aceacb6244f9f                                                                                            6 minutes ago  Running  kube-scheduler            0        30704c112d028
2187c92e487ba  25f8c7f3da61c                                                                                            6 minutes ago  Running  etcd                      0        6bf3b50bb5eb4
9457fb7075229  62930710c9634                                                                                            6 minutes ago  Running  kube-apiserver            0        e6986cb941737
a7e7eaacf8427  25444908517a5                                                                                            6 minutes ago  Running  kube-controller-manager   0        4fda3c01a2916
*
* ==> coredns [d912cc0c981d] <==
*
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
*
* ==> describe nodes <==
*
Name:               false-20220221084934-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=false-20220221084934-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=false-20220221084934-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T08_55_57_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 08:55:54 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  false-20220221084934-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:02:35 +0000
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  MemoryPressure  False   Mon, 21 Feb 2022 09:01:35 +0000  Mon, 21 Feb 2022 08:55:51 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Mon, 21 Feb 2022 09:01:35 +0000  Mon, 21 Feb 2022 08:55:51 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Mon, 21 Feb 2022 09:01:35 +0000  Mon, 21 Feb 2022 08:55:51 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Mon, 21 Feb 2022 09:01:35 +0000  Mon, 21 Feb 2022 08:56:08 +0000  KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    false-20220221084934-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                082ec138-1616-4f1c-85e0-734b853b620f
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace    Name                                                CPU Requests         CPU Limits        Memory Requests      Memory Limits        Age
  ---------    ----                                                ------------         ----------        ---------------      -------------        ---
  default      netcat-668db85669-gl7hj                             0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)     6m26s
  kube-system  coredns-64897985d-9k8b6                             100m (1%!)(MISSING)  0 (0%!)(MISSING)  70Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)  6m31s
  kube-system  etcd-false-20220221084934-6550                      100m (1%!)(MISSING)  0 (0%!)(MISSING)  100Mi (0%!)(MISSING)  0 (0%!)(MISSING)    6m44s
  kube-system  kube-apiserver-false-20220221084934-6550            250m (3%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)     6m44s
  kube-system  kube-controller-manager-false-20220221084934-6550   200m (2%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)     6m45s
  kube-system  kube-proxy-mlfhq                                    0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)     6m32s
  kube-system  kube-scheduler-false-20220221084934-6550            100m (1%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)     6m44s
  kube-system  storage-provisioner                                 0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)     0 (0%!)(MISSING)     6m29s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (9%!)(MISSING)   0 (0%!)(MISSING)
  memory             170Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  Starting                 6m30s                  kube-proxy
  Normal  NodeHasSufficientMemory  6m52s (x4 over 6m53s)  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m52s (x3 over 6m53s)  kubelet     Node false-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m52s (x3 over 6m53s)  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  Starting                 6m45s                  kubelet     Starting kubelet.
  Normal  NodeHasNoDiskPressure    6m45s                  kubelet     Node false-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m45s                  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  6m45s                  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeNotReady             6m44s                  kubelet     Node false-20220221084934-6550 status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                6m34s                  kubelet     Node false-20220221084934-6550 status is now: NodeReady
*
* ==> dmesg <==
*
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 bf dd f8 dd 25 08 06
[ +3.033891] IPv4: martian source 10.85.0.141 from 10.85.0.141, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 de 02 bf 6b fe 08 06
[ +3.108367] IPv4: martian source 10.85.0.142 from 10.85.0.142, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bd 92 b1 df 50 08 06
[ +3.036056] IPv4: martian source 10.85.0.143 from 10.85.0.143, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 1f 60 a8 09 4e 08 06
[ +2.954252] IPv4: martian source 10.85.0.144 from 10.85.0.144, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 43 39 ae e2 13 08 06
[ +3.203300] IPv4: martian source 10.85.0.145 from 10.85.0.145, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 3e 0e a7 c7 cc 08 06
[ +2.484933] IPv4: martian source 10.85.0.146 from 10.85.0.146, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 67 74 76 d8 af 08 06
[ +2.531504] IPv4: martian source 10.85.0.147 from 10.85.0.147, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e cc b7 0a 27 7e 08 06
[ +3.156388] IPv4: martian source 10.85.0.148 from 10.85.0.148, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 3a c4 2f a5 8f 08 06
[ +2.783142] IPv4: martian source 10.85.0.149 from 10.85.0.149, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 2e 75 c1 1f e5 08 06
[ +3.065560] IPv4: martian source 10.85.0.150 from 10.85.0.150, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 88 c1 5c 06 a5 08 06
[ +3.173096] IPv4: martian source 10.85.0.151 from 10.85.0.151, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e b2 3f 08 7a 3c 08 06
[ +2.513515] IPv4: martian source 10.85.0.152 from 10.85.0.152, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e bc d4 ae 6d 61 08 06
*
* ==> etcd [2187c92e487b] <==
*
{"level":"info","ts":"2022-02-21T08:55:51.131Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-02-21T08:55:51.131Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-02-21T08:55:51.132Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-02-21T08:55:51.132Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-02-21T08:55:51.132Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:false-20220221084934-6550 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.021Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-02-21T08:55:52.021Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2022-02-21T08:56:58.930Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"198.564035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-21T08:56:58.931Z","caller":"traceutil/trace.go:171","msg":"trace[1439317579] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:550; }","duration":"198.721248ms","start":"2022-02-21T08:56:58.732Z","end":"2022-02-21T08:56:58.930Z","steps":["trace[1439317579] 'range keys from in-memory index tree' (duration: 198.422442ms)"],"step_count":1}
*
* ==> kernel <==
*
09:02:42 up 45 min, 0 users, load average: 4.35, 4.42, 3.54
Linux false-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [9457fb707522] <==
*
I0221 08:55:54.765251 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0221 08:55:54.765269 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0221 08:55:54.765256 1 cache.go:39] Caches are synced for autoregister controller
I0221 08:55:54.768869 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0221 08:55:54.802059 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0221 08:55:54.802111 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0221 08:55:55.665387 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0221 08:55:55.665422 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0221 08:55:55.681704 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0221 08:55:55.685033 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0221 08:55:55.685056 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0221 08:55:56.157792 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0221 08:55:56.193703 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0221 08:55:56.330378 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0221 08:55:56.335482 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0221 08:55:56.336420 1 controller.go:611] quota admission added evaluator for: endpoints
I0221 08:55:56.340066 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0221 08:55:56.827742 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0221 08:55:57.513454 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0221 08:55:57.521139 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0221 08:55:57.531819 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0221 08:56:10.881594 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0221 08:56:10.930759 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0221 08:56:12.024488 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0221 08:56:16.361946 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.106.71.151]
*
* ==> kube-controller-manager [a7e7eaacf842] <==
*
I0221 08:56:10.276301 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone:
I0221 08:56:10.276312 1 shared_informer.go:247] Caches are synced for endpoint_slice
W0221 08:56:10.276375 1 node_lifecycle_controller.go:1012] Missing timestamp for Node false-20220221084934-6550. Assuming now as a timestamp.
I0221 08:56:10.276390 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0221 08:56:10.276405 1 shared_informer.go:247] Caches are synced for stateful set
I0221 08:56:10.276458 1 event.go:294] "Event occurred" object="false-20220221084934-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node false-20220221084934-6550 event: Registered Node false-20220221084934-6550 in Controller"
I0221 08:56:10.276466 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal.
I0221 08:56:10.286454 1 shared_informer.go:247] Caches are synced for resource quota
I0221 08:56:10.294638 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0221 08:56:10.311994 1 shared_informer.go:247] Caches are synced for endpoint
I0221 08:56:10.328263 1 shared_informer.go:247] Caches are synced for resource quota
I0221 08:56:10.376678 1 shared_informer.go:247] Caches are synced for cronjob
I0221 08:56:10.376713 1 shared_informer.go:247] Caches are synced for job
I0221 08:56:10.376689 1 shared_informer.go:247] Caches are synced for TTL after finished
I0221 08:56:10.747622 1 shared_informer.go:247] Caches are synced for garbage collector
I0221 08:56:10.776595 1 shared_informer.go:247] Caches are synced for garbage collector
I0221 08:56:10.776620 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0221 08:56:10.885251 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0221 08:56:10.936391 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlfhq"
I0221 08:56:11.012442 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0221 08:56:11.132460 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-snkv2"
I0221 08:56:11.135927 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9k8b6"
I0221 08:56:11.153674 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-snkv2"
I0221 08:56:16.364324 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1"
I0221 08:56:16.371476 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-gl7hj"
*
* ==> kube-proxy [8a0c30ea7fd7] <==
*
I0221 08:56:11.827853 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0221 08:56:11.827946 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0221 08:56:11.827994 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0221 08:56:12.009554 1 server_others.go:206] "Using iptables Proxier"
I0221 08:56:12.019908 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0221 08:56:12.019933 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0221 08:56:12.019962 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0221 08:56:12.020760 1 server.go:656] "Version info" version="v1.23.4"
I0221 08:56:12.021637 1 config.go:226] "Starting endpoint slice config controller"
I0221 08:56:12.021666 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0221 08:56:12.021800 1 config.go:317] "Starting service config controller"
I0221 08:56:12.021806 1 shared_informer.go:240] Waiting for caches to sync for service config
I0221 08:56:12.122213 1 shared_informer.go:247] Caches are synced for service config
I0221 08:56:12.122323 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [d7932880a27c] <==
*
W0221 08:55:54.734855 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0221 08:55:54.735560 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0221 08:55:54.735489 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0221 08:55:54.735587 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0221 08:55:54.735153 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0221 08:55:54.735599 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0221 08:55:54.736340 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0221 08:55:54.736398 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0221 08:55:55.648728 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0221 08:55:55.648772 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 08:55:55.813293 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0221 08:55:55.813330 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0221 08:55:55.824829 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0221 08:55:55.824882 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0221 08:55:55.845490 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0221 08:55:55.845529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0221 08:55:55.878719 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0221 08:55:55.878760 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0221 08:55:55.928333 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0221 08:55:55.928442 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0221 08:55:56.004454 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0221 08:55:56.004493 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0221 08:55:56.004454 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0221 08:55:56.004520 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
I0221 08:55:58.230940 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Mon 2022-02-21 08:55:41 UTC, end at Mon 2022-02-21 09:02:42 UTC. --
Feb 21 08:56:12 false-20220221084934-6550 kubelet[1923]: I0221 08:56:12.409928 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9k8b6 through plugin: invalid network status for"
Feb 21 08:56:12 false-20220221084934-6550 kubelet[1923]: I0221 08:56:12.664199 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-snkv2 through plugin: invalid network status for"
Feb 21 08:56:12 false-20220221084934-6550 kubelet[1923]: I0221 08:56:12.672723 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9k8b6 through plugin: invalid network status for"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.303997 1923 topology_manager.go:200] "Topology Admit Handler"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.320731 1923 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e58a0e76-397e-4653-82c8-a63621513203-tmp\") pod \"storage-provisioner\" (UID: \"e58a0e76-397e-4653-82c8-a63621513203\") " pod="kube-system/storage-provisioner"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.320776 1923 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqwk\" (UniqueName: \"kubernetes.io/projected/e58a0e76-397e-4653-82c8-a63621513203-kube-api-access-hmqwk\") pod \"storage-provisioner\" (UID: \"e58a0e76-397e-4653-82c8-a63621513203\") " pod="kube-system/storage-provisioner"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.783913 1923 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3329154f839c36499d1d5650e062fdeeefc2b87a21cdce53605eb8cc5deab440"
Feb 21 08:56:16 false-20220221084934-6550 kubelet[1923]: I0221 08:56:16.375695 1923 topology_manager.go:200] "Topology Admit Handler"
Feb 21 08:56:16 false-20220221084934-6550 kubelet[1923]: I0221 08:56:16.438066 1923 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfgc\" (UniqueName: \"kubernetes.io/projected/ba6605ea-dfed-40ce-83bd-cbd1b3c35da1-kube-api-access-mcfgc\") pod \"netcat-668db85669-gl7hj\" (UID: \"ba6605ea-dfed-40ce-83bd-cbd1b3c35da1\") " pod="default/netcat-668db85669-gl7hj"
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.017545 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/netcat-668db85669-gl7hj through plugin: invalid network status for"
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.017821 1923 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9d884bdb5ec490729eecb9c8241e2a0b987899f13fac076c2daff26fe1d6cfb2"
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.949572 1923 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-config-volume\") pod \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\" (UID: \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\") "
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.949640 1923 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2hhn\" (UniqueName: \"kubernetes.io/projected/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-kube-api-access-x2hhn\") pod \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\" (UID: \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\") "
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: W0221 08:56:17.949904 1923 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/2ca2a7a8-2903-47ca-bcf3-097175f8bc79/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.951222 1923 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ca2a7a8-2903-47ca-bcf3-097175f8bc79" (UID: "2ca2a7a8-2903-47ca-bcf3-097175f8bc79"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.952198 1923 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-kube-api-access-x2hhn" (OuterVolumeSpecName: "kube-api-access-x2hhn") pod "2ca2a7a8-2903-47ca-bcf3-097175f8bc79" (UID: "2ca2a7a8-2903-47ca-bcf3-097175f8bc79"). InnerVolumeSpecName "kube-api-access-x2hhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.032519 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/netcat-668db85669-gl7hj through plugin: invalid network status for"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.032579 1923 scope.go:110] "RemoveContainer" containerID="f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.046523 1923 scope.go:110] "RemoveContainer" containerID="f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: E0221 08:56:18.047358 1923 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2" containerID="f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.047421 1923 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2} err="failed to get container status \"f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2\": rpc error: code = Unknown desc = Error: No such container: f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.050827 1923 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-config-volume\") on node \"false-20220221084934-6550\" DevicePath \"\""
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.050880 1923 reconciler.go:300] "Volume detached for volume \"kube-api-access-x2hhn\" (UniqueName: \"kubernetes.io/projected/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-kube-api-access-x2hhn\") on node \"false-20220221084934-6550\" DevicePath \"\""
Feb 21 08:56:20 false-20220221084934-6550 kubelet[1923]: I0221 08:56:20.026953 1923 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2ca2a7a8-2903-47ca-bcf3-097175f8bc79 path="/var/lib/kubelet/pods/2ca2a7a8-2903-47ca-bcf3-097175f8bc79/volumes"
Feb 21 08:56:21 false-20220221084934-6550 kubelet[1923]: I0221 08:56:21.069881 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/netcat-668db85669-gl7hj through plugin: invalid network status for"
*
* ==> storage-provisioner [5ea2efd75138] <==
*
I0221 08:56:13.910678 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0221 08:56:13.919371 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0221 08:56:13.919441 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0221 08:56:13.928725 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0221 08:56:13.928915 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_false-20220221084934-6550_d7189e91-1926-46aa-822b-8ac81b49033a!
I0221 08:56:13.929221 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"609f23d9-58d6-4551-87eb-7b3e8a7082a4", APIVersion:"v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' false-20220221084934-6550_d7189e91-1926-46aa-822b-8ac81b49033a became leader
I0221 08:56:14.029182 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_false-20220221084934-6550_d7189e91-1926-46aa-822b-8ac81b49033a!

-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p false-20220221084934-6550 -n false-20220221084934-6550
helpers_test.go:262: (dbg) Run: kubectl --context false-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/false]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context false-20220221084934-6550 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 describe pod : exit status 1 (41.507944ms)

** stderr **
error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context false-20220221084934-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "false-20220221084934-6550" profile ...
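The post-mortem above surfaces non-running pods with a field selector (status.phase!=Running). The same query expressed with client-go rather than the kubectl CLI; the kubeconfig path is a placeholder and this is only a sketch of the query, not the test helper itself:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}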
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p false-20220221084934-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p false-20220221084934-6550: (2.910147752s) === CONT TestNetworkPlugins/group/kindnet === RUN TestNetworkPlugins/group/kindnet/Start net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p kindnet-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker --container-runtime=docker === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126372066s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:02:54.642100 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150214828s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172290442s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/custom-weave/Start net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=docker: exit status 105 (8m39.119069821s) -- stdout -- * [custom-weave-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) - MINIKUBE_LOCATION=13641 - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube - MINIKUBE_BIN=out/minikube-linux-amd64 * Using the docker driver based on user configuration - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities * Starting control plane node custom-weave-20220221084934-6550 in cluster custom-weave-20220221084934-6550 * Pulling base image ... * Creating docker container (CPUs=2, Memory=2048MB) ... * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... - kubelet.housekeeping-interval=5m - Generating certificates and keys ... - Booting up control plane ... - Configuring RBAC rules ... * Configuring testdata/weavenet.yaml (Container Networking Interface) ... * Verifying Kubernetes components... 
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0221 08:54:47.458219 227869 out.go:297] Setting OutFile to fd 1 ...
I0221 08:54:47.458326 227869 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:54:47.458338 227869 out.go:310] Setting ErrFile to fd 2...
I0221 08:54:47.458344 227869 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:54:47.458503 227869 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
I0221 08:54:47.458917 227869 out.go:304] Setting JSON to false
I0221 08:54:47.461070 227869 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2242,"bootTime":1645431446,"procs":806,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0221 08:54:47.461183 227869 start.go:122] virtualization: kvm guest
I0221 08:54:47.464031 227869 out.go:176] * [custom-weave-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0221 08:54:47.464153 227869 notify.go:193] Checking for updates...
I0221 08:54:47.465465 227869 out.go:176] - MINIKUBE_LOCATION=13641
I0221 08:54:47.466737 227869 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0221 08:54:47.468108 227869 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 08:54:47.469317 227869 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
I0221 08:54:47.471589 227869 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0221 08:54:47.472040 227869 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:47.472126 227869 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:47.472199 227869 config.go:176] Loaded profile config "cilium-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:47.472247 227869 driver.go:344] Setting default libvirt URI to qemu:///system
I0221 08:54:47.517461 227869 docker.go:132] docker version: linux-20.10.12
I0221 08:54:47.517586 227869 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0221 08:54:47.620138 227869 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:47.551657257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}}
I0221 08:54:47.620271 227869 docker.go:237] overlay module found
I0221 08:54:47.622372 227869 out.go:176] * Using the docker driver based on user configuration
I0221 08:54:47.622397 227869 start.go:281] selected driver: docker
I0221 08:54:47.622412 227869 start.go:798] validating driver "docker" against
I0221 08:54:47.622433 227869 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
W0221 08:54:47.622515 227869 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0221 08:54:47.622540 227869 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0221 08:54:47.623978 227869 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0221 08:54:47.624791 227869 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0221 08:54:47.725034 227869 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:47.66170668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}}
I0221 08:54:47.725164 227869 start_flags.go:288] no existing cluster config was found, will generate one from the flags
I0221 08:54:47.725316 227869 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0221 08:54:47.725345 227869 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0221 08:54:47.725369 227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
I0221 08:54:47.725389 227869 start_flags.go:297] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
I0221 08:54:47.725399 227869 start_flags.go:302] config: {Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 08:54:47.727724 227869 out.go:176] * Starting control plane node custom-weave-20220221084934-6550 in cluster custom-weave-20220221084934-6550
I0221 08:54:47.727767 227869 cache.go:120] Beginning downloading kic base image for docker with docker
I0221 08:54:47.729212 227869 out.go:176] * Pulling base image ...
I0221 08:54:47.729243 227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 08:54:47.729280 227869 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
I0221 08:54:47.729295 227869 cache.go:57] Caching tarball of preloaded images
I0221 08:54:47.729343 227869 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0221 08:54:47.729540 227869 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0221 08:54:47.729557 227869 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker
I0221 08:54:47.729678 227869 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json ...
I0221 08:54:47.729700 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json: {Name:mka893c0a5ff8738d3209de71a273b5ed5f8c7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:47.776587 227869 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0221 08:54:47.776615 227869 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0221 08:54:47.776635 227869 cache.go:208] Successfully downloaded all kic artifacts
I0221 08:54:47.776674 227869 start.go:313] acquiring machines lock for custom-weave-20220221084934-6550: {Name:mk4ea336349dcf18d26ade5ee9a9024978187ca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0221 08:54:47.776813 227869 start.go:317] acquired machines lock for "custom-weave-20220221084934-6550" in 118.503µs
I0221 08:54:47.776843 227869 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 08:54:47.776919 227869 start.go:126] createHost starting for "" (driver="docker")
I0221 08:54:47.779541 227869 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0221 08:54:47.779787 227869 start.go:160] libmachine.API.Create for "custom-weave-20220221084934-6550" (driver="docker")
I0221 08:54:47.779820 227869 client.go:168] LocalClient.Create starting
I0221 08:54:47.779884 227869 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem
I0221 08:54:47.779933 227869 main.go:130] libmachine: Decoding PEM data...
I0221 08:54:47.779958 227869 main.go:130] libmachine: Parsing certificate...
I0221 08:54:47.780028 227869 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem
I0221 08:54:47.780052 227869 main.go:130] libmachine: Decoding PEM data...
I0221 08:54:47.780078 227869 main.go:130] libmachine: Parsing certificate...
I0221 08:54:47.780404 227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0221 08:54:47.812283 227869 cli_runner.go:180] docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0221 08:54:47.812354 227869 network_create.go:254] running [docker network inspect custom-weave-20220221084934-6550] to gather additional debugging logs...
I0221 08:54:47.812371 227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550
W0221 08:54:47.846261 227869 cli_runner.go:180] docker network inspect custom-weave-20220221084934-6550 returned with exit code 1
I0221 08:54:47.846317 227869 network_create.go:257] error running [docker network inspect custom-weave-20220221084934-6550]: docker network inspect custom-weave-20220221084934-6550: exit status 1
stdout:
[]
stderr:
Error: No such network: custom-weave-20220221084934-6550
I0221 08:54:47.846350 227869 network_create.go:259] output of [docker network inspect custom-weave-20220221084934-6550]:
-- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: custom-weave-20220221084934-6550
** /stderr **
I0221 08:54:47.846437 227869 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 08:54:47.880149 227869 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-8af72e223855 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:a5:dd:c8}}
I0221 08:54:47.880989 227869 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006d4200] misses:0}
I0221 08:54:47.881044 227869 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0221 08:54:47.881059 227869 network_create.go:106] attempt to create docker network custom-weave-20220221084934-6550 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0221 08:54:47.881116 227869 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220221084934-6550
I0221 08:54:47.951115 227869 network_create.go:90] docker network custom-weave-20220221084934-6550 192.168.58.0/24 created
I0221 08:54:47.951148 227869 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20220221084934-6550" container
I0221 08:54:47.951220 227869 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0221 08:54:47.991401 227869 cli_runner.go:133] Run: docker volume create custom-weave-20220221084934-6550 --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true
I0221 08:54:48.025554 227869 oci.go:102] Successfully created a docker volume custom-weave-20220221084934-6550
I0221 08:54:48.025643 227869 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --entrypoint /usr/bin/test -v custom-weave-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
I0221 08:54:48.595681 227869 oci.go:106] Successfully prepared a docker volume custom-weave-20220221084934-6550
I0221 08:54:48.595760 227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 08:54:48.595785 227869 kic.go:179] Starting extracting preloaded images to volume ...
I0221 08:54:48.595864 227869 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
I0221 08:54:54.606684 227869 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.010752765s)
I0221 08:54:54.606731 227869 kic.go:188] duration metric: took 6.010943 seconds to extract preloaded images to volume
W0221 08:54:54.606773 227869 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0221 08:54:54.606787 227869 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0221 08:54:54.606827 227869 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0221 08:54:54.713053 227869 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220221084934-6550 --name custom-weave-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --network custom-weave-20220221084934-6550 --ip 192.168.58.2 --volume custom-weave-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
I0221 08:54:55.197249 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Running}}
I0221 08:54:55.251551 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:54:55.285366 227869 cli_runner.go:133] Run: docker exec custom-weave-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables
I0221 08:54:55.364656 227869 oci.go:281] the created container "custom-weave-20220221084934-6550" has a running status.
I0221 08:54:55.364693 227869 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa...
I0221 08:54:55.460289 227869 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0221 08:54:55.569379 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:54:55.607358 227869 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0221 08:54:55.607386 227869 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys]
I0221 08:54:55.707944 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:54:55.746584 227869 machine.go:88] provisioning docker machine ...
I0221 08:54:55.746625 227869 ubuntu.go:169] provisioning hostname "custom-weave-20220221084934-6550"
I0221 08:54:55.746679 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:55.782136 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:55.782378 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:55.782408 227869 main.go:130] libmachine: About to run SSH command:
sudo hostname custom-weave-20220221084934-6550 && echo "custom-weave-20220221084934-6550" | sudo tee /etc/hostname
I0221 08:54:55.920475 227869 main.go:130] libmachine: SSH cmd err, output: : custom-weave-20220221084934-6550
I0221 08:54:55.920553 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:55.975664 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:55.975866 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:55.975900 227869 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\scustom-weave-20220221084934-6550' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220221084934-6550/g' /etc/hosts;
	else
		echo '127.0.1.1 custom-weave-20220221084934-6550' | sudo tee -a /etc/hosts;
	fi
fi
I0221 08:54:56.102934 227869 main.go:130] libmachine: SSH cmd err, output: :
I0221 08:54:56.102974 227869 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube}
I0221 08:54:56.103020 227869 ubuntu.go:177] setting up certificates
I0221 08:54:56.103036 227869 provision.go:83] configureAuth start
I0221 08:54:56.103092 227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550
I0221 08:54:56.140749 227869 provision.go:138] copyHostCerts
I0221 08:54:56.140814 227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ...
I0221 08:54:56.140828 227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem
I0221 08:54:56.140916 227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes)
I0221 08:54:56.141002 227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ...
I0221 08:54:56.141016 227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem
I0221 08:54:56.141053 227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes)
I0221 08:54:56.141122 227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ...
I0221 08:54:56.141135 227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem
I0221 08:54:56.141163 227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes)
I0221 08:54:56.141225 227869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220221084934-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220221084934-6550]
I0221 08:54:56.326607 227869 provision.go:172] copyRemoteCerts
I0221 08:54:56.326675 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0221 08:54:56.326718 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.363092 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:56.452714 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0221 08:54:56.472983 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
I0221 08:54:56.494894 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0221 08:54:56.515723 227869 provision.go:86] duration metric: configureAuth took 412.669796ms
I0221 08:54:56.515755 227869 ubuntu.go:193] setting minikube options for container-runtime
I0221 08:54:56.515964 227869 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:56.516026 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.553857 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:56.554015 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:56.554037 227869 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 08:54:56.675412 227869 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 08:54:56.675444 227869 ubuntu.go:71] root file system type: overlay
I0221 08:54:56.675646 227869 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 08:54:56.675703 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.714231 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:56.714406 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:56.714509 227869 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 08:54:56.855829 227869 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0221 08:54:56.855929 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.893976 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:56.894175 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:56.894198 227869 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 08:54:57.579128 227869 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-02-21 08:54:56.850898043 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 08:54:57.579162 227869 machine.go:91] provisioned docker machine in 1.832554133s
I0221 08:54:57.579173 227869 client.go:171] LocalClient.Create took 9.799347142s
I0221 08:54:57.579189 227869 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20220221084934-6550" took 9.79940181s
I0221 08:54:57.579201 227869 start.go:267] post-start starting for "custom-weave-20220221084934-6550" (driver="docker")
I0221 08:54:57.579207 227869 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 08:54:57.579305 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 08:54:57.579351 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:57.613063 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:57.703066 227869 ssh_runner.go:195] Run: cat /etc/os-release
I0221 08:54:57.705959 227869 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 08:54:57.705980 227869 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 08:54:57.705991 227869 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 08:54:57.705996 227869 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 08:54:57.706004 227869 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
I0221 08:54:57.706050 227869 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:54:57.706110 227869 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:54:57.706179 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:54:57.713029 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:54:57.731016 227869 start.go:270] post-start completed in 151.786403ms I0221 08:54:57.731352 227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550 I0221 08:54:57.764434 227869 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json ... I0221 08:54:57.764715 227869 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:54:57.764768 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:57.796823 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:57.883538 227869 start.go:129] duration metric: createHost completed in 10.106607266s I0221 08:54:57.883571 227869 start.go:80] releasing machines lock for "custom-weave-20220221084934-6550", held for 10.106740513s I0221 08:54:57.883662 227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550 I0221 08:54:57.916447 227869 ssh_runner.go:195] Run: systemctl --version I0221 08:54:57.916504 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:57.916539 227869 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:54:57.916595 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:57.952282 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:57.953012 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:58.182655 227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 08:54:58.192269 227869 ssh_runner.go:195] Run: sudo systemctl cat 
docker.service I0221 08:54:58.201710 227869 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 08:54:58.201772 227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 08:54:58.217490 227869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 08:54:58.236241 227869 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 08:54:58.328534 227869 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 08:54:58.405690 227869 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:54:58.418618 227869 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 08:54:58.507435 227869 ssh_runner.go:195] Run: sudo systemctl start docker I0221 08:54:58.517435 227869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:54:58.555565 227869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:54:58.596881 227869 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 08:54:58.596957 227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:54:58.628733 227869 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0221 08:54:58.632087 227869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:54:58.643526 227869 out.go:176] - kubelet.housekeeping-interval=5m I0221 08:54:58.643605 227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:58.643653 227869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:54:58.675389 227869 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:54:58.675418 227869 docker.go:537] Images already preloaded, skipping extraction I0221 08:54:58.675488 227869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:54:58.708483 227869 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:54:58.708509 227869 cache_images.go:84] Images are preloaded, skipping loading I0221 08:54:58.708561 227869 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 08:54:58.791115 227869 cni.go:93] Creating CNI manager for 
"testdata/weavenet.yaml" I0221 08:54:58.791158 227869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 08:54:58.791174 227869 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220221084934-6550 NodeName:custom-weave-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 08:54:58.791341 227869 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.58.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "custom-weave-20220221084934-6550" kubeletExtraArgs: node-ip: 192.168.58.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.58.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 08:54:58.791445 227869 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker 
I0221 08:54:58.791445 227869 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2

[Install]
 config:
{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
I0221 08:54:58.791498 227869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
I0221 08:54:58.798800 227869 binaries.go:44] Found k8s binaries, skipping transfer
I0221 08:54:58.799251 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0221 08:54:58.807147 227869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes)
I0221 08:54:58.820224 227869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0221 08:54:58.833088 227869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
I0221 08:54:58.846338 227869 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0221 08:54:58.849240 227869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 08:54:58.858694 227869 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550 for IP: 192.168.58.2
I0221 08:54:58.858805 227869 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
I0221 08:54:58.858840 227869 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
I0221 08:54:58.858885 227869 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key
I0221 08:54:58.858898 227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt with IP's: []
I0221 08:54:59.108630 227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt ...
I0221 08:54:59.108671 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt: {Name:mk10a31cfb47f6cf3f7da307f7bac4d74ffcf445 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:59.108910 227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key ...
I0221 08:54:59.108933 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key: {Name:mke61651e1bae31960788075de046902ba3a384d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:59.109066 227869 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041
I0221 08:54:59.109088 227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0221 08:54:59.505500 227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 ...
I0221 08:54:59.505538 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041: {Name:mkbc006409aa5d703ce8a53644ff64d9eca16a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:59.505785 227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 ...
I0221 08:54:59.505805 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041: {Name:mkad1017a3ef8cd68460d4665ab5aa6e577c7d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:59.505895 227869 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt
I0221 08:54:59.505949 227869 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key
I0221 08:54:59.506011 227869 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key
I0221 08:54:59.506028 227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt with IP's: []
I0221 08:54:59.595538 227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt ...
I0221 08:54:59.595578 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt: {Name:mk42c1b2b0663ef91b5f6118e4e09fad281d7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:59.595806 227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key ...
I0221 08:54:59.595823 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key: {Name:mk2f72a2c489551e30437a2aea9d0cb930af0fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:59.595993 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes)
W0221 08:54:59.596029 227869 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes
I0221 08:54:59.596043 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes)
I0221 08:54:59.596096 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes)
I0221 08:54:59.596127 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes)
I0221 08:54:59.596151 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes)
I0221 08:54:59.596191 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes)
I0221 08:54:59.597036 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0221 08:54:59.616277 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0221 08:54:59.637516 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0221 08:54:59.655614 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0221 08:54:59.673516 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0221 08:54:59.691562 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0221 08:54:59.709384 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0221 08:54:59.731673 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0221 08:54:59.749383 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0221 08:54:59.768558 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes)
I0221 08:54:59.785931 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes)
I0221 08:54:59.803428 227869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0221 08:54:59.816515 227869 ssh_runner.go:195] Run: openssl version
I0221 08:54:59.821519 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem"
I0221 08:54:59.829127 227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem
I0221 08:54:59.832411 227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem
I0221 08:54:59.832456 227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem
I0221 08:54:59.837155 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0"
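The openssl x509 -hash -noout run above computes the subject-hash filename (51391683.0 for 6550.pem) that OpenSSL's CA lookup expects under /etc/ssl/certs, and the ln -fs that follows creates that link. A minimal Go sketch of the same two steps (shelling out to openssl exactly as the log does; linkByHash is a hypothetical helper and needs root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash symlinks cert into dir under the name <subject-hash>.0,
// which is how OpenSSL locates trusted CA certificates.
func linkByHash(cert, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", dir, strings.TrimSpace(string(out)))
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(cert, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}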
I0221 08:54:59.844619 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem"
I0221 08:54:59.852034 227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem
I0221 08:54:59.855268 227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem
I0221 08:54:59.855304 227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem
I0221 08:54:59.860269 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0"
I0221 08:54:59.867781 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0221 08:54:59.875277 227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0221 08:54:59.878320 227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem
I0221 08:54:59.878371 227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0221 08:54:59.883480 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0221 08:54:59.891452 227869 kubeadm.go:391] StartCluster: {Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 08:54:59.891586 227869 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0221 08:54:59.924799 227869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0221 08:54:59.932091 227869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0221 08:54:59.939371 227869 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0221 08:54:59.939430 227869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0221 08:54:59.947372 227869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0221 08:54:59.947423 227869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0221 08:55:00.482705 227869 out.go:203] - Generating certificates and keys ...
I0221 08:55:03.685435 227869 out.go:203] - Booting up control plane ...
I0221 08:55:10.727547 227869 out.go:203] - Configuring RBAC rules ...
I0221 08:55:11.151901 227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
I0221 08:55:11.154044 227869 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
I0221 08:55:11.154111 227869 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ...
I0221 08:55:11.154161 227869 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
I0221 08:55:11.207872 227869 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
I0221 08:55:11.207908 227869 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
I0221 08:55:11.231141 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0221 08:55:12.304984 227869 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.073803299s)
I0221 08:55:12.305050 227869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0221 08:55:12.305176 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:12.305176 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=custom-weave-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:12.403260 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:12.403289 227869 ops.go:34] apiserver oom_adj: -16
I0221 08:55:12.963301 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:13.462762 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:13.963185 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:14.463531 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:14.962764 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:15.463397 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:15.963546 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:16.462752 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:16.963400 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:17.463637 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:17.963168 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:18.463128 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:18.962774 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:19.463663 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:19.962811 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:20.463551 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:20.963554 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:21.463298 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:21.963457 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:22.463549 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:22.963434 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:23.463347 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:23.962843 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:24.019474 227869 kubeadm.go:1020] duration metric: took 11.714385799s to wait for elevateKubeSystemPrivileges.
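The run of identical kubectl get sa default commands above is a poll: kubeadm creates the default service account asynchronously, so minikube retries on a short interval until the account exists, which is what the 11.7s elevateKubeSystemPrivileges duration metric measures. A minimal sketch of such a loop (stdlib only; waitForDefaultSA is a hypothetical helper, and the 500ms interval is an assumption read off the log timestamps):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or
// the timeout elapses, mirroring the repeated log entries above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.4/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}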
I0221 08:55:24.019508 227869 kubeadm.go:393] StartCluster complete in 24.128063045s
I0221 08:55:24.019531 227869 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:55:24.019619 227869 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 08:55:24.020875 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
W0221 08:55:24.035745 227869 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
I0221 08:55:25.038511 227869 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220221084934-6550" rescaled to 1
I0221 08:55:25.038569 227869 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 08:55:25.041496 227869 out.go:176] * Verifying Kubernetes components...
I0221 08:55:25.038653 227869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0221 08:55:25.041566 227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 08:55:25.038656 227869 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0221 08:55:25.041635 227869 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220221084934-6550"
I0221 08:55:25.039253 227869 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:55:25.041657 227869 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220221084934-6550"
W0221 08:55:25.041668 227869 addons.go:165] addon storage-provisioner should already be in state true
I0221 08:55:25.041708 227869 host.go:66] Checking if "custom-weave-20220221084934-6550" exists ...
I0221 08:55:25.041706 227869 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220221084934-6550"
I0221 08:55:25.041747 227869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220221084934-6550"
I0221 08:55:25.042057 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:55:25.042294 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:55:25.057925 227869 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220221084934-6550" to be "Ready" ...
I0221 08:55:25.062489 227869 node_ready.go:49] node "custom-weave-20220221084934-6550" has status "Ready":"True"
I0221 08:55:25.062517 227869 node_ready.go:38] duration metric: took 4.554004ms waiting for node "custom-weave-20220221084934-6550" to be "Ready" ...
I0221 08:55:25.062529 227869 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 08:55:25.075842 227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ...
I0221 08:55:25.091233 227869 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0221 08:55:25.091370 227869 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0221 08:55:25.091386 227869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0221 08:55:25.091440 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:55:25.103387 227869 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220221084934-6550"
W0221 08:55:25.103416 227869 addons.go:165] addon default-storageclass should already be in state true
I0221 08:55:25.103439 227869 host.go:66] Checking if "custom-weave-20220221084934-6550" exists ...
I0221 08:55:25.103789 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:55:25.136464 227869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0221 08:55:25.138654 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
I0221 08:55:25.154985 227869 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0221 08:55:25.155049 227869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0221 08:55:25.155102 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:55:25.188302 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
I0221 08:55:25.323710 227869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0221 08:55:25.509102 227869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0221 08:55:25.628703 227869 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
I0221 08:55:26.031236 227869 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0221 08:55:26.031270 227869 addons.go:417] enableAddons completed in 992.622832ms
I0221 08:55:27.093638 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
(... identical pod_ready.go:102 entries from I0221 08:55:29.095472 through I0221 08:59:22.095445 elided: pod "coredns-64897985d-fw5hd" in "kube-system" namespace was polled roughly every 2.5s and reported "Ready":"False" every time ...)
I0221 08:59:24.593641 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:25.099642 227869 pod_ready.go:81] duration metric: took 4m0.023714023s waiting for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ...
E0221 08:59:25.099664 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0221 08:59:25.099673 227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.101152 227869 pod_ready.go:97] error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found
I0221 08:59:25.101173 227869 pod_ready.go:81] duration metric: took 1.494584ms waiting for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ...
E0221 08:59:25.101182 227869 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found
I0221 08:59:25.101190 227869 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.105178 227869 pod_ready.go:92] pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:25.105196 227869 pod_ready.go:81] duration metric: took 3.99997ms waiting for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.105204 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.109930 227869 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:25.109949 227869 pod_ready.go:81] duration metric: took 4.739462ms waiting for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.109958 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.292675 227869 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:25.292711 227869 pod_ready.go:81] duration metric: took 182.734028ms waiting for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.292723 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.691815 227869 pod_ready.go:92] pod "kube-proxy-q4stn" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:25.691839 227869 pod_ready.go:81] duration metric: took 399.108423ms waiting for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ...
I0221 08:59:25.691848 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:26.092539 227869 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:26.092566 227869 pod_ready.go:81] duration metric: took 400.710732ms waiting for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:26.092579 227869 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ...
I0221 08:59:28.498990 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
(... identical pod_ready.go:102 entries from I0221 08:59:30.998871 through I0221 09:03:23.499957 elided: pod "weave-net-dgkzh" in "kube-system" namespace was polled roughly every 2.5s and reported "Ready":"False" every time ...)
I0221 09:03:25.998718 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:03:26.503352 227869 pod_ready.go:81] duration metric: took 4m0.410759109s waiting for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ...
E0221 09:03:26.503375 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0221 09:03:26.503381 227869 pod_ready.go:38] duration metric: took 8m1.440836229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:03:26.503404 227869 api_server.go:51] waiting for apiserver process to appear ...
I0221 09:03:26.505928 227869 out.go:176] W0221 09:03:26.506107 227869 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared W0221 09:03:26.506213 227869 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled W0221 09:03:26.506230 227869 out.go:241] * Related issues: * Related issues: W0221 09:03:26.506275 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/4536 - https://github.com/kubernetes/minikube/issues/4536 W0221 09:03:26.506318 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/6014 - https://github.com/kubernetes/minikube/issues/6014 I0221 09:03:26.507855 227869 out.go:176] ** /stderr ** net_test.go:101: failed start: exit status 105 === CONT TestNetworkPlugins/group/custom-weave net_test.go:154: skipping remaining tests for weave, as results can be unpredictable panic.go:642: *** TestNetworkPlugins/group/custom-weave FAILED at 2022-02-21 09:03:26.546071833 +0000 UTC m=+2299.308391427 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/custom-weave]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect custom-weave-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect custom-weave-20220221084934-6550: -- stdout -- [ { "Id": "59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa", "Created": "2022-02-21T08:54:54.750983019Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 229111, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:54:55.188353195Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/resolv.conf", "HostnamePath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/hostname", "HostsPath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/hosts", "LogPath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa-json.log", "Name": "/custom-weave-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "custom-weave-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "custom-weave-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": 
null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98c
d/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb/merged", "UpperDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb/diff", "WorkDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "custom-weave-20220221084934-6550", "Source": "/var/lib/docker/volumes/custom-weave-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "custom-weave-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ 
"/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "custom-weave-20220221084934-6550", "name.minikube.sigs.k8s.io": "custom-weave-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "2e50e0d2e9bb9cbe23d616c3eb71bd84e258ca3dfe1782abff0ee5c5702e7d74", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49369" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49368" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49365" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49367" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49366" } ] }, "SandboxKey": "/var/run/docker/netns/2e50e0d2e9bb", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "custom-weave-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.58.2" }, "Links": null, "Aliases": [ "59cfea5eeecf", "custom-weave-20220221084934-6550" ], "NetworkID": "8f04c0f799cdbf343e84d425f1ca4388cf92aa7825dd26e2443bcb2e6ddf3e18", "EndpointID": "f47747a8e866677e75de509d5ebff9f8d325a45eae331c580281ffef64bb4293", "Gateway": "192.168.58.1", "IPAddress": "192.168.58.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:3a:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p custom-weave-20220221084934-6550 -n custom-weave-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/custom-weave FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/custom-weave]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p custom-weave-20220221084934-6550 logs -n 25 === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/custom-weave helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p custom-weave-20220221084934-6550 logs -n 25: (1.30489947s) helpers_test.go:253: TestNetworkPlugins/group/custom-weave logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | start | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:06 UTC | Mon, 21 Feb 2022 08:53:13 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:13 UTC | Mon, 21 Feb 2022 08:53:15 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | start | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:05 UTC | Mon, 
21 Feb 2022 08:53:21 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | | --memory=2200 | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | | --alsologtostderr -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | start | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:52:46 UTC | Mon, 21 Feb 2022 08:53:25 UTC | | | --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:25 UTC | Mon, 21 Feb 2022 08:53:26 UTC | | | --alsologtostderr -v=5 | | | | | | | unpause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:27 UTC | Mon, 21 Feb 2022 08:53:28 UTC | | | --alsologtostderr -v=5 | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p 
cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:02:46 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:02:46.418914 421870 out.go:297] Setting OutFile to fd 1 ... I0221 09:02:46.419151 421870 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:02:46.419167 421870 out.go:310] Setting ErrFile to fd 2... I0221 09:02:46.419173 421870 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:02:46.419315 421870 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:02:46.419744 421870 out.go:304] Setting JSON to false I0221 09:02:46.422139 421870 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2721,"bootTime":1645431446,"procs":586,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:02:46.422249 421870 start.go:122] virtualization: kvm guest I0221 09:02:46.425907 421870 out.go:176] * [kindnet-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:02:46.427552 421870 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:02:46.426088 421870 notify.go:193] Checking for updates... 
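The hostinfo JSON that start.go:112 logs above matches the field shape of gopsutil's host.InfoStat, which is presumably what produces it. A sketch of emitting the same line, assuming gopsutil v3:

// Sketch: gather and serialize the host fingerprint logged as "hostinfo:".
package main

import (
	"encoding/json"
	"fmt"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	// Returns hostname, uptime, bootTime, procs, platform, kernel and
	// virtualization details, with JSON keys matching the log line above.
	info, err := host.Info()
	if err != nil {
		panic(err)
	}
	b, _ := json.Marshal(info)
	fmt.Println(string(b))
}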
I0221 09:02:46.429105 421870 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:02:46.430539 421870 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:02:46.431957 421870 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:02:46.433542 421870 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:02:46.434195 421870 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:46.434347 421870 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:46.434466 421870 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:46.434580 421870 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:02:46.485736 421870 docker.go:132] docker version: linux-20.10.12 I0221 09:02:46.485848 421870 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:02:46.590394 421870 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:02:46.526492405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:02:46.590501 421870 docker.go:237] overlay module found I0221 09:02:46.592885 421870 out.go:176] * Using the docker driver based on user configuration I0221 09:02:46.592913 421870 start.go:281] selected driver: docker I0221 09:02:46.592920 421870 start.go:798] validating driver "docker" against I0221 09:02:46.592941 421870 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:02:46.593002 421870 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:02:46.593034 421870 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:02:46.594359 421870 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:02:46.595176 421870 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:02:46.689048 421870 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:02:46.626989751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:02:46.689173 421870 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:02:46.689337 421870 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:02:46.689374 421870 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:02:46.689398 421870 cni.go:93] Creating CNI manager for "kindnet" I0221 09:02:46.689411 421870 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk" I0221 09:02:46.689420 421870 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk" I0221 09:02:46.689425 421870 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni I0221 09:02:46.689436 421870 start_flags.go:302] config: {Name:kindnet-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:02:46.691655 421870 out.go:176] * Starting control plane node kindnet-20220221084934-6550 in cluster kindnet-20220221084934-6550 I0221 09:02:46.691689 421870 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:02:46.693178 421870 out.go:176] * Pulling base image ... 
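The entries that follow check the preload tarball cache and skip the download because the file is already present. A minimal sketch of such a check; tarballFor is a hypothetical helper, with the cache layout and file name pattern taken from the paths visible in the log:

// Sketch: stat the cached preload tarball before deciding to download.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// tarballFor is a hypothetical helper; the name pattern mirrors the log.
func tarballFor(k8sVersion, runtime string) string {
	return filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v17-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime))
}

func main() {
	p := tarballFor("v1.23.4", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload in cache, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}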
I0221 09:02:46.693202 421870 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:02:46.693231 421870 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:02:46.693248 421870 cache.go:57] Caching tarball of preloaded images I0221 09:02:46.693298 421870 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:02:46.693510 421870 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:02:46.693531 421870 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:02:46.693663 421870 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/config.json ... I0221 09:02:46.693691 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/config.json: {Name:mk5e9f6fabb2503a70e5e3f2016d5064b170a784 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:46.739404 421870 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:02:46.739434 421870 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:02:46.739450 421870 cache.go:208] Successfully downloaded all kic artifacts I0221 09:02:46.739489 421870 start.go:313] acquiring machines lock for kindnet-20220221084934-6550: {Name:mkae4e55a073d1017bc2176c7236155c21c25592 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:02:46.739642 421870 start.go:317] acquired machines lock for "kindnet-20220221084934-6550" in 125.251µs I0221 09:02:46.739672 421870 start.go:89] Provisioning new machine with config: &{Name:kindnet-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local 
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:02:46.739741 421870 start.go:126] createHost starting for "" (driver="docker") I0221 09:02:43.499053 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:45.499484 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:46.742792 421870 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:02:46.742990 421870 start.go:160] libmachine.API.Create for "kindnet-20220221084934-6550" (driver="docker") I0221 09:02:46.743041 421870 client.go:168] LocalClient.Create starting I0221 09:02:46.743109 421870 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:02:46.743139 421870 main.go:130] libmachine: Decoding PEM data... I0221 09:02:46.743155 421870 main.go:130] libmachine: Parsing certificate... I0221 09:02:46.743210 421870 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:02:46.743229 421870 main.go:130] libmachine: Decoding PEM data... I0221 09:02:46.743240 421870 main.go:130] libmachine: Parsing certificate... 
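LocalClient.Create above walks each certificate through read, PEM-decode, and parse before provisioning starts. The standard-library equivalent of that three-step sequence, with the long ca.pem path shortened:

// Sketch: the read / decode / parse steps libmachine logs above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile(".minikube/certs/ca.pem") // Reading certificate data
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // Decoding PEM data...
	if block == nil {
		panic("no PEM block found in ca.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes) // Parsing certificate...
	if err != nil {
		panic(err)
	}
	fmt.Println("CA subject:", cert.Subject, "expires:", cert.NotAfter)
}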
I0221 09:02:46.743611 421870 cli_runner.go:133] Run: docker network inspect kindnet-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:02:46.776910 421870 cli_runner.go:180] docker network inspect kindnet-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:02:46.777036 421870 network_create.go:254] running [docker network inspect kindnet-20220221084934-6550] to gather additional debugging logs... I0221 09:02:46.777068 421870 cli_runner.go:133] Run: docker network inspect kindnet-20220221084934-6550 W0221 09:02:46.810101 421870 cli_runner.go:180] docker network inspect kindnet-20220221084934-6550 returned with exit code 1 I0221 09:02:46.810129 421870 network_create.go:257] error running [docker network inspect kindnet-20220221084934-6550]: docker network inspect kindnet-20220221084934-6550: exit status 1 stdout: [] stderr: Error: No such network: kindnet-20220221084934-6550 I0221 09:02:46.810149 421870 network_create.go:259] output of [docker network inspect kindnet-20220221084934-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: kindnet-20220221084934-6550 ** /stderr ** I0221 09:02:46.810193 421870 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:02:46.850325 421870 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001142e0] misses:0} I0221 09:02:46.850390 421870 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:02:46.850413 421870 network_create.go:106] attempt to create docker network kindnet-20220221084934-6550 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
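The inspect above exited non-zero because the network does not exist yet, so a free private subnet was reserved and the entry that follows creates it. A sketch of that inspect-then-create fallback via os/exec, reusing the exact flags the log records (including minikube's unusual `-o --ip-masq` form):

// Sketch: probe for a docker network and create it if the probe fails.
package main

import (
	"fmt"
	"os/exec"
)

func ensureNetwork(name, subnet, gateway string) error {
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // network already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values as chosen in the log above.
	if err := ensureNetwork("kindnet-20220221084934-6550", "192.168.49.0/24", "192.168.49.1"); err != nil {
		panic(err)
	}
}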
I0221 09:02:46.850466 421870 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220221084934-6550 I0221 09:02:46.941862 421870 network_create.go:90] docker network kindnet-20220221084934-6550 192.168.49.0/24 created I0221 09:02:46.941911 421870 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20220221084934-6550" container I0221 09:02:46.941988 421870 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:02:46.986866 421870 cli_runner.go:133] Run: docker volume create kindnet-20220221084934-6550 --label name.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:02:47.033525 421870 oci.go:102] Successfully created a docker volume kindnet-20220221084934-6550 I0221 09:02:47.033640 421870 cli_runner.go:133] Run: docker run --rm --name kindnet-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --entrypoint /usr/bin/test -v kindnet-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:02:47.653123 421870 oci.go:106] Successfully prepared a docker volume kindnet-20220221084934-6550 I0221 09:02:47.653182 421870 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:02:47.653206 421870 kic.go:179] Starting extracting preloaded images to volume ... I0221 09:02:47.653278 421870 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:02:47.499802 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:49.999065 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:51.999352 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:53.357753 421870 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.704415115s) I0221 09:02:53.357793 421870 kic.go:188] duration metric: took 5.704584 seconds to extract preloaded images to volume W0221 09:02:53.357839 421870 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:02:53.357848 421870 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
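The extraction that just completed (5.7s) runs tar inside a throwaway container, mounting the lz4 preload read-only and the cluster's named volume as the target. A sketch of the same invocation, with the long cache path and image digest shortened; the flags match the command in the log:

// Sketch: extract a preload tarball into a named docker volume.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4",
		"kindnet-20220221084934-6550",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531")
	if err != nil {
		fmt.Println(err)
	}
}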
I0221 09:02:53.357899 421870 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:02:53.494107 421870 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220221084934-6550 --name kindnet-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --network kindnet-20220221084934-6550 --ip 192.168.49.2 --volume kindnet-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:02:53.938142 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Running}} I0221 09:02:53.977761 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:02:54.014430 421870 cli_runner.go:133] Run: docker exec kindnet-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:02:54.092969 421870 oci.go:281] the created container "kindnet-20220221084934-6550" has a running status. I0221 09:02:54.093005 421870 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa... I0221 09:02:54.326177 421870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:02:54.417769 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:02:54.456669 421870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:02:54.456690 421870 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:02:54.581545 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:02:54.633727 421870 machine.go:88] provisioning docker machine ... 
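The kic SSH bootstrap above generates an RSA key for the machine and installs the public half as /home/docker/.ssh/authorized_keys inside the container. A sketch of the key-generation half, assuming golang.org/x/crypto/ssh; the output paths are examples, not minikube's layout:

// Sketch: create an RSA keypair and its authorized_keys line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded like the id_rsa the log creates.
	keyPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", keyPEM, 0600); err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	// This single line is what gets copied into authorized_keys above.
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}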
I0221 09:02:54.633785 421870 ubuntu.go:169] provisioning hostname "kindnet-20220221084934-6550" I0221 09:02:54.633845 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:54.674073 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:54.674260 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:54.674279 421870 main.go:130] libmachine: About to run SSH command: sudo hostname kindnet-20220221084934-6550 && echo "kindnet-20220221084934-6550" | sudo tee /etc/hostname I0221 09:02:54.809726 421870 main.go:130] libmachine: SSH cmd err, output: : kindnet-20220221084934-6550 I0221 09:02:54.809848 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:54.851935 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:54.852127 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:54.852159 421870 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\skindnet-20220221084934-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220221084934-6550/g' /etc/hosts; else echo '127.0.1.1 kindnet-20220221084934-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:02:54.978885 421870 main.go:130] libmachine: SSH cmd err, output: : I0221 09:02:54.978913 421870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:02:54.978931 421870 ubuntu.go:177] setting up certificates I0221 09:02:54.978938 421870 provision.go:83] configureAuth start I0221 09:02:54.978987 421870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220221084934-6550 I0221 09:02:55.016078 421870 provision.go:138] copyHostCerts I0221 09:02:55.016145 421870 exec_runner.go:144] found 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:02:55.016160 421870 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:02:55.016225 421870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:02:55.016312 421870 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:02:55.016325 421870 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:02:55.016355 421870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:02:55.016441 421870 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:02:55.016454 421870 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:02:55.016482 421870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:02:55.016545 421870 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220221084934-6550 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220221084934-6550] I0221 09:02:55.142876 421870 provision.go:172] copyRemoteCerts I0221 09:02:55.142927 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:02:55.142956 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.180040 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:55.271610 421870 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:02:55.290168 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes) I0221 09:02:55.309782 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 09:02:55.329912 421870 provision.go:86] duration metric: configureAuth took 350.961994ms I0221 09:02:55.329942 421870 ubuntu.go:193] setting minikube options for container-runtime I0221 09:02:55.330124 421870 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:55.330167 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.365504 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:55.365645 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:55.365660 421870 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:02:55.491213 421870 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:02:55.491235 421870 ubuntu.go:71] root file system type: overlay I0221 09:02:55.491415 421870 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:02:55.491488 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.526411 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:55.526581 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:55.526680 421870 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:02:55.662369 421870 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:02:55.662441 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.699216 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:55.699355 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:55.699374 421870 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:02:54.503567 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:56.998735 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:56.491686 421870 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:02:55.657011187 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:02:56.491800 421870 machine.go:91] provisioned docker machine in 1.858045045s I0221 09:02:56.491822 421870 client.go:171] LocalClient.Create took 9.74877581s I0221 09:02:56.491872 421870 start.go:168] duration metric: libmachine.API.Create for "kindnet-20220221084934-6550" took 9.748881649s I0221 09:02:56.491889 421870 start.go:267] post-start starting for "kindnet-20220221084934-6550" (driver="docker") I0221 09:02:56.491903 421870 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:02:56.492012 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:02:56.492066 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.526465 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.615129 421870 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:02:56.617935 421870 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:02:56.617957 421870 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:02:56.617965 421870 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:02:56.617970 421870 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:02:56.617978 421870 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
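The machine provisioning above (09:02:54.633 through 09:02:56.491) reduces to three idempotent shell patterns pushed over SSH: persist the hostname, pin it in /etc/hosts, and swap in the rendered docker.service only when it differs from what is installed. A standalone sketch of the same commands, with the profile name pulled out as a variable (minikube renders these from Go templates rather than running a script like this):

-- example --
#!/usr/bin/env bash
set -euo pipefail
NAME="kindnet-20220221084934-6550"

# 1. Set the kernel hostname and persist it across reboots.
sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname

# 2. Make sure /etc/hosts resolves the new name, editing at most once.
if ! grep -xq ".*\s$NAME" /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
  else
    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
  fi
fi

# 3. Install the rendered docker.service and restart only when it changed:
#    diff exits 0 when the files match, so the || block runs only on a diff.
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload
  sudo systemctl -f enable docker
  sudo systemctl -f restart docker
}
-- /example --

The diff guard is why an already-provisioned node skips the docker restart entirely; here the unit did change (see the diff above), so the daemon was reloaded and restarted.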
I0221 09:02:56.618034 421870 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:02:56.618103 421870 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:02:56.618171 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:02:56.625275 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:02:56.644498 421870 start.go:270] post-start completed in 152.588958ms I0221 09:02:56.644916 421870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220221084934-6550 I0221 09:02:56.681293 421870 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/config.json ... I0221 09:02:56.681515 421870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:02:56.681610 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.721379 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.808966 421870 start.go:129] duration metric: createHost completed in 10.069211525s I0221 09:02:56.808998 421870 start.go:80] releasing machines lock for "kindnet-20220221084934-6550", held for 10.069338497s I0221 09:02:56.809095 421870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220221084934-6550 I0221 09:02:56.850076 421870 ssh_runner.go:195] Run: systemctl --version I0221 09:02:56.850132 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.850167 421870 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:02:56.850237 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.890827 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.891336 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.976243 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:02:57.132743 421870 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:02:57.148381 421870 
cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:02:57.148442 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:02:57.159381 421870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:02:57.175410 421870 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:02:57.271305 421870 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:02:57.355294 421870 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:02:57.367402 421870 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:02:57.488213 421870 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:02:57.498236 421870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:02:57.539420 421870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:02:57.582488 421870 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:02:57.582558 421870 cli_runner.go:133] Run: docker network inspect kindnet-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:02:57.617583 421870 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0221 09:02:57.621298 421870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:02:57.633091 421870 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:02:57.634593 421870 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk I0221 09:02:57.634664 421870 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:02:57.634717 421870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:02:57.669633 421870 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:02:57.669660 421870 docker.go:537] Images already preloaded, skipping extraction I0221 09:02:57.669718 421870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:02:57.702534 421870 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:02:57.702557 421870 cache_images.go:84] Images are preloaded, skipping loading I0221 09:02:57.702614 421870 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:02:57.801443 421870 cni.go:93] 
Creating CNI manager for "kindnet" I0221 09:02:57.801477 421870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:02:57.801495 421870 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220221084934-6550 NodeName:kindnet-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 09:02:57.802095 421870 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "kindnet-20220221084934-6550"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 09:02:57.802216 421870 kubeadm.go:936] kubelet
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk
--config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} I0221 09:02:57.802277 421870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 09:02:57.812194 421870 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:02:57.812264 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:02:57.820437 421870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes) I0221 09:02:57.835878 421870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:02:57.851132 421870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes) I0221 09:02:57.866717 421870 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0221 09:02:57.870186 421870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:02:57.880930 421870 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550 for IP: 192.168.49.2 I0221 09:02:57.881073 421870 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:02:57.881123 421870 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:02:57.881182 421870 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.key I0221 09:02:57.881201 421870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt with IP's: [] I0221 09:02:58.122192 421870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt ... 
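Two smaller config writes follow the same push-over-SSH style: crictl is pointed at the dockershim socket (its YAML is two lines, collapsed into one in the log at 09:02:57.159), and the stable names host.minikube.internal and control-plane.minikube.internal are injected into /etc/hosts by filtering out any stale record before appending the fresh one. A sketch of both, assuming the IPs shown above:

-- example --
#!/usr/bin/env bash
set -euo pipefail

# crictl config: both endpoints go to dockershim, matching the tee above.
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/dockershim.sock' \
  'image-endpoint: unix:///var/run/dockershim.sock' \
  | sudo tee /etc/crictl.yaml

# Host-record injection: drop any old line for the name, append the new
# one, then copy the temp file over /etc/hosts (grep -v keeps the rest).
inject_host_record() {
  local ip="$1" name="$2"
  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
  sudo cp "/tmp/h.$$" /etc/hosts
}
inject_host_record 192.168.49.1 host.minikube.internal
inject_host_record 192.168.49.2 control-plane.minikube.internal
-- /example --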
I0221 09:02:58.122248 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: {Name:mkfcad536857e2df5f764473a6c4022c78e2cb6b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.122520 421870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.key ... I0221 09:02:58.122555 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.key: {Name:mk0a0ed6833930623faa4187b4c5b9df5d813c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.122712 421870 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2 I0221 09:02:58.122738 421870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:02:58.294361 421870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2 ... I0221 09:02:58.294395 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2: {Name:mk57f05819422b53c694b7dbd0538167943b8123 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.294585 421870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2 ... 
I0221 09:02:58.294598 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2: {Name:mkace58def46b0ded866dbe122dff06b32df1c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.294688 421870 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt I0221 09:02:58.294748 421870 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key I0221 09:02:58.294794 421870 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key I0221 09:02:58.294807 421870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt with IP's: [] I0221 09:02:58.386352 421870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt ... I0221 09:02:58.386394 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt: {Name:mk1e5b13534fa30b464e2af4b13ee0434adbb152 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.386580 421870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key ... 
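The client, apiserver, and aggregator (proxy-client) certificates above are generated in-process by crypto.go, so no openssl runs during provisioning. For illustration only, the apiserver cert is roughly equivalent to the openssl invocation below; the subject CN and -days value are placeholders, while the SAN IPs are exactly the ones logged: the node IP, the kubernetes service ClusterIP 10.96.0.1, loopback, and 10.0.0.1.

-- example --
#!/usr/bin/env bash
set -euo pipefail
# Illustration only: minikube builds this cert with Go's crypto/x509,
# signing it with the cached minikubeCA key pair (ca.crt / ca.key).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
openssl x509 -req -in apiserver.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out apiserver.crt -days 365 \
  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')
-- /example --

The .dd3b5fb2 suffix seen above is a hash of the SAN set: the cert is written under the suffixed name first, then copied to apiserver.crt / apiserver.key once complete, so a SAN change forces regeneration.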
I0221 09:02:58.386595 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key: {Name:mkccae812985e46ee45b7ed63a4e8f01e4ef79bc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.386808 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:02:58.386847 421870 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:02:58.386860 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:02:58.386879 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:02:58.386904 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:02:58.386930 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:02:58.386967 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:02:58.387842 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:02:58.406406 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:02:58.427877 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 
bytes) I0221 09:02:58.450370 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:02:58.471031 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:02:58.488588 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:02:58.510325 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:02:58.531870 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:02:58.552589 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:02:58.572723 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:02:58.592243 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:02:58.612569 421870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:02:58.627629 421870 ssh_runner.go:195] Run: openssl version I0221 09:02:58.632470 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:02:58.640718 421870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:02:58.643932 421870 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:02:58.643984 421870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:02:58.649626 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:02:58.659813 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:02:58.668032 421870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:02:58.671395 421870 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:02:58.671466 421870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:02:58.676558 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L 
/etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:02:58.686078 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:02:58.695102 421870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:02:58.698227 421870 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:02:58.698283 421870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:02:58.703261 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:02:58.711034 421870 kubeadm.go:391] StartCluster: {Name:kindnet-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:02:58.711184 421870 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:02:58.743141 421870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:02:58.750761 421870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:02:58.758046 421870 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:02:58.758093 421870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf 
/etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:02:58.765348 421870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:02:58.765385 421870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:02:59.415593 421870 out.go:203] - Generating certificates and keys ... I0221 09:02:57.356331 223679 system_pods.go:86] 9 kube-system pods found I0221 09:02:57.356379 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:02:57.356394 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:02:57.356411 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:02:57.356420 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:02:57.356428 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:02:57.356435 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:02:57.356448 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:02:57.356454 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:02:57.356467 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:02:57.356486 223679 retry.go:31] will retry after 47.463338706s: missing components: kube-dns I0221 09:02:58.999291 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" 
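The openssl x509 -hash / ln -fs pairs at 09:02:58.6 above wire each installed CA bundle up the way OpenSSL expects: trust lookups go through <subject-hash>.0 symlinks in /etc/ssl/certs, which is where 3ec20f2e.0, b5213941.0, and 51391683.0 come from. Generalized into a loop (the log does this file by file, guarded by test -s / test -L):

-- example --
#!/usr/bin/env bash
set -euo pipefail
# OpenSSL resolves trust via <subject-hash>.0 symlinks in /etc/ssl/certs,
# so every installed PEM gets one (e.g. b5213941.0 -> minikubeCA.pem).
for pem in /usr/share/ca-certificates/*.pem; do
  h=$(openssl x509 -hash -noout -in "$pem")
  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
done
-- /example --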
I0221 09:03:00.999500 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:02.244326 421870 out.go:203] - Booting up control plane ... I0221 09:03:03.001366 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:05.498670 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:10.293083 421870 out.go:203] - Configuring RBAC rules ... I0221 09:03:10.709558 421870 cni.go:93] Creating CNI manager for "kindnet" I0221 09:03:10.711610 421870 out.go:176] * Configuring CNI (Container Networking Interface) ... I0221 09:03:10.711695 421870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap I0221 09:03:10.716222 421870 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ... I0221 09:03:10.716244 421870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes) I0221 09:03:10.732451 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0221 09:03:07.499251 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:09.998225 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:11.999084 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:11.923993 421870 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.191495967s) I0221 09:03:11.924059 421870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:03:11.924159 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:11.924167 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=kindnet-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T09_03_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:12.032677 421870 ops.go:34] apiserver oom_adj: -16 I0221 09:03:12.032780 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:12.605285 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:13.105004 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:13.605923 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:14.105476 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:14.605851 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:15.105728 421870 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:15.605955 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:16.105032 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:14.499690 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:16.998485 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:16.605672 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:17.105080 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:17.605790 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:18.106006 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:18.605297 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:19.105722 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:19.605419 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:20.105021 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:20.605093 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:21.105621 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:19.498295 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:21.498521 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:21.605168 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:22.105314 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:22.605614 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:23.105172 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:23.605755 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:23.737549 421870 kubeadm.go:1020] duration metric: took 11.81345574s to wait for elevateKubeSystemPrivileges. 
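The run of near-identical `kubectl get sa default` lines above, spaced roughly 500ms apart, is elevateKubeSystemPrivileges polling until the default service account exists (service accounts only appear once kube-controller-manager is up), so that the minikube-rbac cluster-admin binding created at 09:03:11.924 can take effect. As a shell loop it is simply:

-- example --
#!/usr/bin/env bash
# Poll (about every 500ms, as in the log) until the default service
# account exists; kubeadm bootstrapping creates it asynchronously.
KUBECTL=/var/lib/minikube/binaries/v1.23.4/kubectl
until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
-- /example --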
I0221 09:03:23.737585 421870 kubeadm.go:393] StartCluster complete in 25.026601823s I0221 09:03:23.737605 421870 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:23.737698 421870 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:03:23.739843 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:24.259930 421870 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220221084934-6550" rescaled to 1 I0221 09:03:24.260011 421870 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:03:24.260042 421870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:03:24.262604 421870 out.go:176] * Verifying Kubernetes components... I0221 09:03:24.260238 421870 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:03:24.262730 421870 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220221084934-6550" I0221 09:03:24.262765 421870 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220221084934-6550" W0221 09:03:24.262773 421870 addons.go:165] addon storage-provisioner should already be in state true I0221 09:03:24.260430 421870 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:24.262809 421870 host.go:66] Checking if "kindnet-20220221084934-6550" exists ... I0221 09:03:24.262817 421870 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220221084934-6550" I0221 09:03:24.262671 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:03:24.262845 421870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220221084934-6550" I0221 09:03:24.263204 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:03:24.263349 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:03:24.307855 421870 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:03:24.307554 421870 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220221084934-6550" W0221 09:03:24.307952 421870 addons.go:165] addon default-storageclass should already be in state true I0221 09:03:24.307975 421870 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:03:24.307981 421870 host.go:66] Checking if "kindnet-20220221084934-6550" exists ... 
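The "rescaled to 1" line above reflects minikube trimming CoreDNS from kubeadm's default of two replicas down to one for the single-node cluster. It performs the edit through client-go rather than the CLI; a hedged kubectl equivalent would be:

-- example --
# Equivalent of the coredns rescale above (minikube uses client-go, not kubectl).
sudo /var/lib/minikube/binaries/v1.23.4/kubectl \
  --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system scale deployment coredns --replicas=1
-- /example --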
I0221 09:03:24.307985 421870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:03:24.308036 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:03:24.308426 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:03:24.346731 421870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:03:24.349502 421870 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220221084934-6550" to be "Ready" ... I0221 09:03:24.369657 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:03:24.369909 421870 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:03:24.369923 421870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:03:24.369985 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:03:24.417801 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:03:24.513471 421870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:03:24.522931 421870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:03:24.602695 421870 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0221 09:03:24.810287 421870 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 09:03:24.810310 421870 addons.go:417] enableAddons completed in 550.081796ms I0221 09:03:26.357757 421870 node_ready.go:58] node "kindnet-20220221084934-6550" has status "Ready":"False" I0221 09:03:23.499957 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:25.998718 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:26.503352 227869 pod_ready.go:81] duration metric: took 4m0.410759109s waiting for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ... E0221 09:03:26.503375 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 09:03:26.503381 227869 pod_ready.go:38] duration metric: took 8m1.440836229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... 
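The configmap pipeline at 09:03:24.346 above (its whitespace is collapsed in the log) is how the host.minikube.internal record reaches cluster DNS: sed splices a hosts plugin block into the Corefile just ahead of the forward plugin, and kubectl replace pushes the edited configmap back. Re-indented:

-- example --
#!/usr/bin/env bash
set -euo pipefail
KUBECTL="sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
# Insert a hosts{} block before "forward . /etc/resolv.conf" so pods can
# resolve host.minikube.internal, then replace the configmap in place.
$KUBECTL -n kube-system get configmap coredns -o yaml \
  | sed '/forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
  | $KUBECTL replace -f -
-- /example --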
I0221 09:03:26.503404 227869 api_server.go:51] waiting for apiserver process to appear ... I0221 09:03:26.505928 227869 out.go:176] W0221 09:03:26.506107 227869 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared W0221 09:03:26.506213 227869 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled W0221 09:03:26.506230 227869 out.go:241] * Related issues: W0221 09:03:26.506275 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/4536 W0221 09:03:26.506318 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/6014 * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 08:54:55 UTC, end at Mon 2022-02-21 09:03:27 UTC. -- Feb 21 09:02:14 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:14.697367580Z" level=info msg="ignoring event" container=88ce05954468ab57698064df19cf814c5ede1ec4eda27856f100378261b791f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:17 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:17.672194107Z" level=info msg="ignoring event" container=252e0bd2b3c27c7ffd30a4ca63fed9b0d2f1690abe0113a3cb903e33da27acb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:20 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:20.864605647Z" level=info msg="ignoring event" container=737ed589748b3a80ba14cd9553476955a9d4a2ab6192db2aeae03bf1dd75d9b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:23 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:23.331864889Z" level=info msg="ignoring event" container=e6a494216854192163347189af4fefab83f8a968046164a80dd1cd46b19ab14c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:25 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:25.884977063Z" level=info msg="ignoring event" container=811e0c6b7450a8178c1d5b10099f8c9d11b74e185a5501f60dacdc8738c3abd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:29 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:29.054733485Z" level=info msg="ignoring event" container=15dce3c4013039e752d3e075d8a3e2a64a7a9e7d05a442eff6d467dfeff4b8ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:31 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:31.809103204Z" level=info msg="ignoring event" container=c5ecf5a037334fd5a00bd55c6db7f11781a556e1d9de94aee6381e1cf698bd46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:34 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:34.872639462Z" level=info msg="ignoring event" container=a5cb2f7b49a0751b3de6dc9ec2932815786dc892007094d62319698b30a8152a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:38 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:38.064951004Z" level=info msg="ignoring event" container=24fa7de1fd8e102016ef8b0ed78d538891c402212ea8790fd304e7f62f49ef27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:40 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:40.569723358Z" level=info msg="ignoring event" 
container=282e8b8957134fd2222ecb0e1bc665e6567fea1576d36c09c217b3670260eb08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:42 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:42.904622303Z" level=info msg="ignoring event" container=dba636f77f8b3865a153b5f9eab718078e2e3a542eeff5549b5eb0ddf5a7a132 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:45 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:45.922787256Z" level=info msg="ignoring event" container=8f15cce7254b970e81801c968279312efcf21fbbbd5116d6b4a04cc5ec89f7a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:49 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:49.297901218Z" level=info msg="ignoring event" container=f69bd826c4b20f09ae642f28d49e95246da6a7a2b73468e78ba3b4b490dba308 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:55 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:55.059571054Z" level=info msg="ignoring event" container=af47f6a40b5dff44f2c94228f44b8340813ccea7507a5fed92f4755a84029496 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:58 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:58.109356523Z" level=info msg="ignoring event" container=796295c4646377148ffd3aa593767ade48b73e56a3c96b7f27f15178bc7fb107 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:01 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:01.372423917Z" level=info msg="ignoring event" container=1f053a4c5730668c77e0ca0ad5c264c0f70994a52ce07ecbf5aedb22a54c72ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:03 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:03.956957575Z" level=info msg="ignoring event" container=46e5973497db31d649e643cb75a26b01252543cdd4fc8bbf99289586091c27a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:06 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:06.557797427Z" level=info msg="ignoring event" container=1aa26186412e915010c101980740e3591abd87c8fb665ce06ebbe089860ccd65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:09 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:09.276258472Z" level=info msg="ignoring event" container=99957ac19e4b1e4a8691fa770f14ae400ee0dbd763658da2036c119b3d3f1f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:12 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:12.157275454Z" level=info msg="ignoring event" container=25ade4adf1d675cfd74595935a4a73a35836ac781e002460f21ba05679b754bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:14 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:14.641343436Z" level=info msg="ignoring event" container=d579e5bc4d63858758e923a26b9e4162c525ed3920c38b3cac0c6dbd1168db6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:17 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:17.283661320Z" level=info msg="ignoring event" container=09ca64fdcc403616320dad9883db51d84abc802dc6d5ca64ab5114f486e96873 
module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:19 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:19.460879613Z" level=info msg="ignoring event" container=881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:22 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:22.643034928Z" level=info msg="ignoring event" container=008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:25 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:25.647622603Z" level=info msg="ignoring event" container=fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 482d2370d9581 e9dd2f85e51b4 2 minutes ago Exited weave 5 4e9007562a877 9e817ce47282a 6e38f40d628db 2 minutes ago Exited storage-provisioner 5 6eb8806cfc7a2 5b965740dd4ad weaveworks/weave-npc@sha256:0f6166e000faa500ccc0df53caae17edd3110590b7b159007a5ea727cdfb1cef 7 minutes ago Running weave-npc 0 4e9007562a877 0cb891515343e 2114245ec4d6b 8 minutes ago Running kube-proxy 0 15ac6f927e0ae a014e0a91eccb 62930710c9634 8 minutes ago Running kube-apiserver 0 770b587b6be71 b59c9c533c60c aceacb6244f9f 8 minutes ago Running kube-scheduler 0 f9d0fcb630265 6039583378dbe 25f8c7f3da61c 8 minutes ago Running etcd 0 56ca1829f5b89 93b77eb808339 25444908517a5 8 minutes ago Running kube-controller-manager 0 c35f5c04ef1df * * ==> describe nodes <== * Name: custom-weave-20220221084934-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=custom-weave-20220221084934-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=custom-weave-20220221084934-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T08_55_12_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 08:55:08 +0000 Taints: Unschedulable: false Lease: HolderIdentity: custom-weave-20220221084934-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:03:21 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:21 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.58.2 Hostname: custom-weave-20220221084934-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki 
hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: d8899eaa-a145-497e-bd02-b1e6b9bda954 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-64897985d-fw5hd 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 8m3s kube-system etcd-custom-weave-20220221084934-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 8m16s kube-system kube-apiserver-custom-weave-20220221084934-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 8m16s kube-system kube-controller-manager-custom-weave-20220221084934-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 8m16s kube-system kube-proxy-q4stn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m3s kube-system kube-scheduler-custom-weave-20220221084934-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 8m16s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m1s kube-system weave-net-dgkzh 20m (0%) 0 (0%) 0 (0%) 0 (0%) 8m3s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 770m (9%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 8m2s kube-proxy Normal NodeHasSufficientMemory 8m23s (x4 over 8m23s) kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m23s (x4 over 8m23s) kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m23s (x4 over 8m23s) kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 8m16s kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m16s kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m16s kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 8m16s kubelet Updated Node Allocatable limit across pods Normal Starting 8m16s kubelet Starting kubelet.
Normal NodeReady 8m6s kubelet Node custom-weave-20220221084934-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 c9 e8 63 60 1b 08 06 [ +5.838269] IPv4: martian source 10.85.0.156 from 10.85.0.156, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 44 32 6b 48 e8 08 06 [ +3.065442] IPv4: martian source 10.85.0.157 from 10.85.0.157, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 26 81 0f 06 4a 08 06 [Feb21 09:03] IPv4: martian source 10.85.0.158 from 10.85.0.158, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 80 7d 07 f0 ca 08 06 [ +2.561210] IPv4: martian source 10.85.0.159 from 10.85.0.159, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 23 e1 c4 83 2c 08 06 [ +2.615653] IPv4: martian source 10.85.0.160 from 10.85.0.160, on dev eth0 [ +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e 64 41 7f 5e 31 08 06 [ +2.733452] IPv4: martian source 10.85.0.161 from 10.85.0.161, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff da fc d1 c9 f2 2a 08 06 [ +2.883194] IPv4: martian source 10.85.0.162 from 10.85.0.162, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 5e d5 29 ea a8 08 06 [ +2.455339] IPv4: martian source 10.85.0.163 from 10.85.0.163, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 50 c8 60 43 de 08 06 [ +2.674144] IPv4: martian source 10.85.0.164 from 10.85.0.164, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff ae b8 d8 5c 06 86 08 06 [ +2.173451] IPv4: martian source 10.85.0.165 from 10.85.0.165, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 23 71 a2 17 13 08 06 [ +3.191430] IPv4: martian source 10.85.0.166 from 10.85.0.166, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff fa ee 02 4a fe dc 08 06 [ +3.010319] IPv4: martian source 10.85.0.167 from 10.85.0.167, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 1f 49 7a 27 ae 08 06 * * ==> etcd [6039583378db] <== * {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.915Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through 
raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:custom-weave-20220221084934-6550 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:55:05.917Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:55:05.919Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"} {"level":"info","ts":"2022-02-21T08:55:05.919Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T08:55:05.919Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T08:55:05.920Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"warn","ts":"2022-02-21T08:55:22.981Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.272633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T08:55:22.982Z","caller":"traceutil/trace.go:171","msg":"trace[870101966] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:0; response_revision:371; }","duration":"113.405634ms","start":"2022-02-21T08:55:22.868Z","end":"2022-02-21T08:55:22.981Z","steps":["trace[870101966] 'range keys from in-memory index tree' (duration: 113.190449ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T08:55:40.058Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"167.151839ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} {"level":"info","ts":"2022-02-21T08:55:40.058Z","caller":"traceutil/trace.go:171","msg":"trace[1744569559] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"178.043871ms","start":"2022-02-21T08:55:39.880Z","end":"2022-02-21T08:55:40.058Z","steps":["trace[1744569559] 'read index received' (duration: 10.359817ms)","trace[1744569559] 'applied index is now lower than readState.Index' (duration: 167.683229ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T08:55:40.059Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"178.172127ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T08:55:40.059Z","caller":"traceutil/trace.go:171","msg":"trace[1187793531] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:497; }","duration":"178.228302ms","start":"2022-02-21T08:55:39.880Z","end":"2022-02-21T08:55:40.059Z","steps":["trace[1187793531] 'agreement among raft nodes before linearized reading' (duration: 178.118124ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T08:55:40.059Z","caller":"traceutil/trace.go:171","msg":"trace[1378877095] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"331.875147ms","start":"2022-02-21T08:55:39.727Z","end":"2022-02-21T08:55:40.059Z","steps":["trace[1378877095] 'process raft request' (duration: 
164.196319ms)","trace[1378877095] 'compare' (duration: 167.060655ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T08:55:40.059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T08:55:39.727Z","time spent":"332.125837ms","remote":"127.0.0.1:51898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare: success:> failure: >"} {"level":"warn","ts":"2022-02-21T09:02:51.241Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"246.5049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/weave-net-dgkzh\" ","response":"range_response_count:1 size:6950"} {"level":"info","ts":"2022-02-21T09:02:51.241Z","caller":"traceutil/trace.go:171","msg":"trace[2034250247] range","detail":"{range_begin:/registry/pods/kube-system/weave-net-dgkzh; range_end:; response_count:1; response_revision:682; }","duration":"246.625029ms","start":"2022-02-21T09:02:50.994Z","end":"2022-02-21T09:02:51.241Z","steps":["trace[2034250247] 'range keys from in-memory index tree' (duration: 246.347077ms)"],"step_count":1} * * ==> kernel <== * 09:03:28 up 46 min, 0 users, load average: 4.47, 4.47, 3.60 Linux custom-weave-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [a014e0a91ecc] <== * I0221 08:55:08.402954 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:55:08.403014 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 08:55:08.403138 1 shared_informer.go:247] Caches are synced for crd-autoregister I0221 08:55:08.403155 1 cache.go:39] Caches are synced for autoregister controller I0221 08:55:08.403411 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:55:08.407626 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 08:55:09.202333 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:55:09.202360 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:55:09.211893 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:55:09.214735 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:55:09.214752 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 08:55:09.609634 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 08:55:09.641913 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 08:55:09.729729 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 08:55:09.734626 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2] I0221 08:55:09.735614 1 controller.go:611] quota admission added evaluator for: endpoints I0221 08:55:09.739395 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 08:55:10.417856 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 08:55:10.963831 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 08:55:10.972339 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 08:55:10.982337 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 08:55:11.205312 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 08:55:24.024100 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 08:55:24.173602 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 08:55:25.214967 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [93b77eb80833] <== * I0221 08:55:23.408050 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0221 08:55:23.408060 1 event.go:294] "Event occurred" object="custom-weave-20220221084934-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node custom-weave-20220221084934-6550 event: Registered Node custom-weave-20220221084934-6550 in Controller" I0221 08:55:23.431888 1 shared_informer.go:247] Caches are synced for TTL I0221 08:55:23.450654 1 shared_informer.go:247] Caches are synced for endpoint_slice I0221 08:55:23.470910 1 shared_informer.go:247] Caches are synced for GC I0221 08:55:23.473094 1 shared_informer.go:247] Caches are synced for persistent volume I0221 08:55:23.480469 1 shared_informer.go:247] Caches are synced for resource quota I0221 08:55:23.482934 1 shared_informer.go:247] Caches are synced for node I0221 08:55:23.482968 1 range_allocator.go:173] Starting range CIDR allocator I0221 08:55:23.482973 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0221 08:55:23.482981 1 shared_informer.go:247] Caches are synced for cidrallocator I0221 08:55:23.490340 1 range_allocator.go:374] Set node custom-weave-20220221084934-6550 PodCIDR to [10.244.0.0/24] I0221 08:55:23.521725 1 shared_informer.go:247] Caches are synced for stateful set I0221 08:55:23.523687 1 shared_informer.go:247] Caches are synced for resource quota I0221 08:55:23.526912 1 shared_informer.go:247] Caches are synced for daemon sets I0221 08:55:23.898780 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:55:23.898816 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0221 08:55:23.903955 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:55:24.026313 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 08:55:24.181386 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q4stn" I0221 08:55:24.183297 1 event.go:294] "Event occurred" object="kube-system/weave-net" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: weave-net-dgkzh" I0221 08:55:24.276312 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-kn627" I0221 08:55:24.280740 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-fw5hd" I0221 08:55:24.550870 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 08:55:24.556696 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-kn627" * * ==> kube-proxy [0cb891515343] <== * I0221 08:55:25.130891 1 node.go:163] Successfully retrieved node IP: 192.168.58.2 I0221 08:55:25.130972 1 server_others.go:138] "Detected node IP" address="192.168.58.2" I0221 08:55:25.131023 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 08:55:25.207154 1 server_others.go:206] "Using iptables Proxier" I0221 08:55:25.207194 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 08:55:25.207207 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 08:55:25.207249 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 08:55:25.207630 1 server.go:656] "Version info" version="v1.23.4" I0221 08:55:25.212832 1 config.go:317] "Starting service config controller" I0221 08:55:25.213026 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 08:55:25.212946 1 config.go:226] "Starting endpoint slice config controller" I0221 08:55:25.213063 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 08:55:25.313289 1 shared_informer.go:247] Caches are synced for endpoint slice config I0221 08:55:25.313423 1 shared_informer.go:247] Caches are synced for service config * * ==> kube-scheduler [b59c9c533c60] <== * E0221 08:55:08.322642 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0221 08:55:08.322648 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in 
API group "storage.k8s.io" at the cluster scope W0221 08:55:08.322466 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 08:55:08.322767 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0221 08:55:08.322780 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 08:55:08.322781 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0221 08:55:08.322946 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:55:08.322984 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:55:08.323112 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 08:55:08.323155 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 08:55:08.323175 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 08:55:08.323206 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 08:55:09.194703 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0221 08:55:09.194743 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0221 08:55:09.197585 1 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:55:09.197608 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:55:09.208788 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 08:55:09.208822 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 08:55:09.269483 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:55:09.269512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:55:09.303358 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 08:55:09.303386 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0221 08:55:09.349092 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 08:55:09.349130 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope I0221 08:55:09.720186 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 08:54:55 UTC, end at Mon 2022-02-21 09:03:28 UTC. 
-- Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.406887 1937 scope.go:110] "RemoveContainer" containerID="482d2370d9581598d5f4c8efcbda364af379d0ee5707ba13f454e267732c045b" Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:20.407453 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=weave pod=weave-net-dgkzh_kube-system(ba48aae4-721f-4a19-a470-782f7c69d914)\"" pod="kube-system/weave-net-dgkzh" podUID=ba48aae4-721f-4a19-a470-782f7c69d914 Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.415726 1937 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-fw5hd_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34\"" Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.418239 1937 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34" Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.419838 1937 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34\"" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.546109 1937 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523} podNetnsPath="/proc/28845/ns/net" networkType="bridge" networkName="crio" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.610368 1937 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523} podNetnsPath="/proc/28845/ns/net" networkType="bridge" networkName="crio" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663482 1937 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t 
nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663574 1937 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663623 1937 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663699 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to 
clean up sandbox container \\\"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \\\"crio\\\" id: \\\"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-fw5hd" podUID=442952fb-cceb-4c88-88d9-f45c8b015e1a Feb 21 09:03:23 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:23.446331 1937 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-fw5hd_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\"" Feb 21 09:03:23 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:23.449628 1937 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523" Feb 21 09:03:23 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:23.451184 1937 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\"" Feb 21 09:03:24 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:24.407514 1937 scope.go:110] "RemoveContainer" containerID="9e817ce47282a2823395a8362af2a42e4cfda5432c37521da922d4379ecc1571" Feb 21 09:03:24 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:24.407806 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(07cca2bf-78dc-4768-b83e-be6bd78df3a2)\"" pod="kube-system/storage-provisioner" podUID=07cca2bf-78dc-4768-b83e-be6bd78df3a2 Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.551793 1937 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0} podNetnsPath="/proc/29008/ns/net" networkType="bridge" networkName="crio" Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.618613 1937 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0} podNetnsPath="/proc/29008/ns/net" networkType="bridge" networkName="crio" Feb 21 
09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663706 1937 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663774 1937 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663802 1937 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 
21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663871 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \\\"crio\\\" id: \\\"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-fw5hd" podUID=442952fb-cceb-4c88-88d9-f45c8b015e1a Feb 21 09:03:26 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:26.483358 1937 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-fw5hd_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\"" Feb 21 09:03:26 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:26.487110 1937 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0" Feb 21 09:03:26 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:26.488542 1937 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\"" * * ==> storage-provisioner [9e817ce47282] <== * I0221 09:00:55.543058 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... 
F0221 09:01:25.546638 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p custom-weave-20220221084934-6550 -n custom-weave-20220221084934-6550 helpers_test.go:262: (dbg) Run: kubectl --context custom-weave-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: coredns-64897985d-fw5hd helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/custom-weave]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context custom-weave-20220221084934-6550 describe pod coredns-64897985d-fw5hd E0221 09:03:29.174010 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory helpers_test.go:276: (dbg) Non-zero exit: kubectl --context custom-weave-20220221084934-6550 describe pod coredns-64897985d-fw5hd: exit status 1 (66.307701ms) ** stderr ** Error from server (NotFound): pods "coredns-64897985d-fw5hd" not found ** /stderr ** helpers_test.go:278: kubectl --context custom-weave-20220221084934-6550 describe pod coredns-64897985d-fw5hd: exit status 1 helpers_test.go:176: Cleaning up "custom-weave-20220221084934-6550" profile ... helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p custom-weave-20220221084934-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-weave-20220221084934-6550: (2.860073013s) === CONT TestNetworkPlugins/group/enable-default-cni === RUN TestNetworkPlugins/group/enable-default-cni/Start net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p enable-default-cni-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker --container-runtime=docker === CONT TestNetworkPlugins/group/kindnet/Start net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker --container-runtime=docker: (48.670320568s) === RUN TestNetworkPlugins/group/kindnet/ControllerPod net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ... helpers_test.go:343: "kindnet-b7vpv" [70703c09-41bc-4c02-9ccf-df45333fbc70] Running net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014904478s === RUN TestNetworkPlugins/group/kindnet/KubeletFlags net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p kindnet-20220221084934-6550 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/kindnet/NetCatPod net_test.go:132: (dbg) Run: kubectl --context kindnet-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... 
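The NetCat subtests here all follow one probe pattern: force-replace a netcat deployment, wait for its pod to become Ready, then exec a DNS lookup through it. Collected into a single sketch against the kindnet profile (the wait invocation is an illustrative stand-in for the helper's own pod polling, not the harness's actual call):

    # Deploy the probe, wait for it, then resolve the in-cluster service name
    kubectl --context kindnet-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-20220221084934-6550 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default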
helpers_test.go:343: "netcat-668db85669-lcmt9" [0fd0efca-25d3-42b8-b210-f9f1dd5821bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151381391s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker: exit status 80 (9m13.225436451s)
-- stdout --
* [calico-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13641
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node calico-20220221084934-6550 in cluster calico-20220221084934-6550
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
- kubelet.housekeeping-interval=5m
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
-- /stdout --
** stderr **
I0221 08:54:31.669336 223679 out.go:297] Setting OutFile to fd 1 ...
I0221 08:54:31.669431 223679 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:54:31.669456 223679 out.go:310] Setting ErrFile to fd 2...
I0221 08:54:31.669459 223679 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:54:31.669575 223679 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
I0221 08:54:31.669863 223679 out.go:304] Setting JSON to false
I0221 08:54:31.671533 223679 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2226,"bootTime":1645431446,"procs":815,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0221 08:54:31.671604 223679 start.go:122] virtualization: kvm guest
I0221 08:54:31.674304 223679 out.go:176] * [calico-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0221 08:54:31.675747 223679 out.go:176] - MINIKUBE_LOCATION=13641
I0221 08:54:31.674505 223679 notify.go:193] Checking for updates...
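The auto-profile DNS failure above comes from net_test.go's check, which boils down to exec'ing nslookup inside the netcat deployment and treating a non-zero exit as failure. A minimal sketch of that probe; the retry and timeout policy here is illustrative, not the test's actual values:

-- sketch (Go) --
package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

// probeDNS runs the same command the test logs above: exec nslookup inside
// the netcat deployment, treating a non-zero exit or a hang as failure and
// retrying until an overall deadline passes.
func probeDNS(kubeContext string) error {
    deadline := time.Now().Add(2 * time.Minute)
    for {
        ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
        out, err := exec.CommandContext(ctx, "kubectl",
            "--context", kubeContext,
            "exec", "deployment/netcat", "--",
            "nslookup", "kubernetes.default").CombinedOutput()
        cancel()
        if err == nil {
            return nil // DNS resolved
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("DNS never resolved: %v\n%s", err, out)
        }
        time.Sleep(10 * time.Second)
    }
}

func main() {
    if err := probeDNS("auto-20220221084933-6550"); err != nil {
        fmt.Println(err)
    }
}
-- /sketch --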
I0221 08:54:31.677072 223679 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 08:54:31.678381 223679 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:54:31.679665 223679 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 08:54:31.680895 223679 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 08:54:31.681490 223679 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:31.681597 223679 config.go:176] Loaded profile config "cert-expiration-20220221085105-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:31.681682 223679 config.go:176] Loaded profile config "cilium-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:31.681731 223679 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:54:31.726270 223679 docker.go:132] docker version: linux-20.10.12 I0221 08:54:31.726387 223679 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:54:31.828014 223679 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:31.757670791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:54:31.828153 223679 docker.go:237] overlay module found I0221 08:54:31.830095 223679 out.go:176] * Using the docker driver based on user configuration I0221 08:54:31.830122 223679 start.go:281] selected driver: docker I0221 08:54:31.830127 223679 start.go:798] validating driver "docker" against I0221 08:54:31.830150 223679 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 08:54:31.830216 223679 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 08:54:31.830236 223679 out.go:241] ! Your cgroup does not allow setting memory. ! Your cgroup does not allow setting memory. I0221 08:54:31.831700 223679 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 08:54:31.832312 223679 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:54:31.933660 223679 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:31.865164378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:54:31.933812 223679 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 08:54:31.933956 223679 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 08:54:31.933978 223679 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 08:54:31.933991 223679 cni.go:93] Creating CNI manager for "calico" I0221 08:54:31.934000 223679 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni I0221 08:54:31.934009 223679 start_flags.go:302] config: {Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:54:31.936655 223679 out.go:176] * Starting control plane node calico-20220221084934-6550 in cluster calico-20220221084934-6550 I0221 08:54:31.936718 223679 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:54:31.938119 223679 out.go:176] * Pulling base image ... 
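The config dump above is minikube's full cluster config struct; as the "Saving config to ... config.json" line below shows, it is persisted as JSON under the profile directory. A heavily trimmed stand-in illustrating the round-trip (the real struct has many more fields, and KubernetesVersion actually lives inside a nested KubernetesConfig, flattened here):

-- sketch (Go) --
package main

import (
    "encoding/json"
    "fmt"
)

// clusterConfig is a small illustrative subset of the fields visible in the
// config dump logged above.
type clusterConfig struct {
    Name              string
    Memory            int
    CPUs              int
    Driver            string
    CNI               string
    KubernetesVersion string
}

func main() {
    cfg := clusterConfig{
        Name:              "calico-20220221084934-6550",
        Memory:            2048,
        CPUs:              2,
        Driver:            "docker",
        CNI:               "calico",
        KubernetesVersion: "v1.23.4",
    }
    // minikube writes the full struct to .minikube/profiles/<name>/config.json.
    b, _ := json.MarshalIndent(cfg, "", "  ")
    fmt.Println(string(b))
}
-- /sketch --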
I0221 08:54:31.938156 223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:31.938186 223679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 08:54:31.938198 223679 cache.go:57] Caching tarball of preloaded images I0221 08:54:31.938250 223679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:54:31.938441 223679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 08:54:31.938462 223679 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 08:54:31.938612 223679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json ... I0221 08:54:31.938638 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json: {Name:mk6dfec3eeded4259016eef6692333e08748c03e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:32.001614 223679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:54:32.001646 223679 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 08:54:32.001665 223679 cache.go:208] Successfully downloaded all kic artifacts I0221 08:54:32.001710 223679 start.go:313] acquiring machines lock for calico-20220221084934-6550: {Name:mk9bd20451a3b8275874174c12a3c8e8fcabb93f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:54:32.001861 223679 start.go:317] acquired machines lock for "calico-20220221084934-6550" in 125.883µs I0221 08:54:32.001895 223679 start.go:89] Provisioning new machine with config: &{Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker 
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:54:32.002014 223679 start.go:126] createHost starting for "" (driver="docker") I0221 08:54:32.004421 223679 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 08:54:32.004718 223679 start.go:160] libmachine.API.Create for "calico-20220221084934-6550" (driver="docker") I0221 08:54:32.004755 223679 client.go:168] LocalClient.Create starting I0221 08:54:32.004831 223679 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 08:54:32.004868 223679 main.go:130] libmachine: Decoding PEM data... I0221 08:54:32.004896 223679 main.go:130] libmachine: Parsing certificate... I0221 08:54:32.004981 223679 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 08:54:32.005006 223679 main.go:130] libmachine: Decoding PEM data... I0221 08:54:32.005024 223679 main.go:130] libmachine: Parsing certificate... I0221 08:54:32.005451 223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:54:32.041628 223679 cli_runner.go:180] docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:54:32.041708 223679 network_create.go:254] running [docker network inspect calico-20220221084934-6550] to gather additional debugging logs... 
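The failed `docker network inspect` above is expected on first start: exit status 1 with "No such network" is how minikube learns that the profile network does not exist yet and must be created. A sketch of that probe; the error classification is inferred from the logged output:

-- sketch (Go) --
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// networkExists mimics the probe logged above: `docker network inspect <name>`
// succeeding means the network is already there; failing with "No such
// network" means it needs to be created; any other failure is a real error.
func networkExists(name string) (bool, error) {
    out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    if err == nil {
        return true, nil
    }
    if strings.Contains(string(out), "No such network") {
        return false, nil // expected on first start; not a real error
    }
    return false, fmt.Errorf("docker network inspect %s: %v\n%s", name, err, out)
}

func main() {
    ok, err := networkExists("calico-20220221084934-6550")
    fmt.Println(ok, err)
}
-- /sketch --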
I0221 08:54:32.041731 223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 W0221 08:54:32.081587 223679 cli_runner.go:180] docker network inspect calico-20220221084934-6550 returned with exit code 1 I0221 08:54:32.081619 223679 network_create.go:257] error running [docker network inspect calico-20220221084934-6550]: docker network inspect calico-20220221084934-6550: exit status 1 stdout: [] stderr: Error: No such network: calico-20220221084934-6550 I0221 08:54:32.081656 223679 network_create.go:259] output of [docker network inspect calico-20220221084934-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: calico-20220221084934-6550 ** /stderr ** I0221 08:54:32.081716 223679 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:54:32.120427 223679 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-8af72e223855 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:a5:dd:c8}} I0221 08:54:32.121233 223679 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3becfb688ac0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ae:26:de:33}} I0221 08:54:32.122028 223679 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000618270] misses:0} I0221 08:54:32.122088 223679 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 08:54:32.122116 223679 network_create.go:106] attempt to create docker network calico-20220221084934-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ... 
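The subnet scan above walks candidate /24s (192.168.49.0, then 192.168.58.0, then 192.168.67.0, i.e. the third octet stepping by 9) and reserves the first one no host interface already claims; the log also shows the reservation held for 1m0s. A sketch under that inferred stepping rule, without the lease bookkeeping:

-- sketch (Go) --
package main

import (
    "fmt"
    "net"
)

// firstFreeSubnet walks candidate /24s the way the log above does,
// skipping any subnet already claimed by an existing interface. The step
// size is inferred from the logged sequence, not copied from minikube's source.
func firstFreeSubnet(taken map[string]bool) string {
    for octet := 49; octet < 255; octet += 9 {
        cidr := fmt.Sprintf("192.168.%d.0/24", octet)
        if !taken[cidr] {
            return cidr
        }
    }
    return ""
}

// takenSubnets collects the networks of all local interfaces (e.g. the
// br-8af72e223855 and br-3becfb688ac0 bridges skipped in the log).
func takenSubnets() map[string]bool {
    taken := map[string]bool{}
    ifaces, _ := net.Interfaces()
    for _, iface := range ifaces {
        addrs, _ := iface.Addrs()
        for _, a := range addrs {
            if _, ipnet, err := net.ParseCIDR(a.String()); err == nil {
                taken[ipnet.String()] = true
            }
        }
    }
    return taken
}

func main() {
    fmt.Println(firstFreeSubnet(takenSubnets()))
}
-- /sketch --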
I0221 08:54:32.122177 223679 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220221084934-6550 I0221 08:54:32.217845 223679 network_create.go:90] docker network calico-20220221084934-6550 192.168.67.0/24 created I0221 08:54:32.217884 223679 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220221084934-6550" container I0221 08:54:32.217960 223679 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 08:54:32.260460 223679 cli_runner.go:133] Run: docker volume create calico-20220221084934-6550 --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true I0221 08:54:32.294046 223679 oci.go:102] Successfully created a docker volume calico-20220221084934-6550 I0221 08:54:32.294150 223679 cli_runner.go:133] Run: docker run --rm --name calico-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --entrypoint /usr/bin/test -v calico-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 08:54:32.998319 223679 oci.go:106] Successfully prepared a docker volume calico-20220221084934-6550 I0221 08:54:32.998383 223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:32.998411 223679 kic.go:179] Starting extracting preloaded images to volume ... I0221 08:54:32.998566 223679 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 08:54:39.205880 223679 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.207231146s) I0221 08:54:39.205919 223679 kic.go:188] duration metric: took 6.207506 seconds to extract preloaded images to volume W0221 08:54:39.205955 223679 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 08:54:39.205964 223679 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
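Preload extraction above is a throwaway container whose entrypoint is tar: the lz4 tarball is mounted read-only and unpacked into the named volume that will become the node's /var. A sketch of the same `docker run`, timed the way the kic.go duration metric is (paths and the image reference are abbreviated from the log):

-- sketch (Go) --
package main

import (
    "fmt"
    "os/exec"
    "time"
)

// extractPreload reproduces the sidecar run logged above: a --rm container
// with /usr/bin/tar as entrypoint, the preload tarball mounted read-only,
// and the profile's named volume mounted at /extractDir.
func extractPreload(tarball, volume, baseImage string) error {
    start := time.Now()
    cmd := exec.Command("docker", "run", "--rm",
        "--entrypoint", "/usr/bin/tar",
        "-v", tarball+":/preloaded.tar:ro",
        "-v", volume+":/extractDir",
        baseImage,
        "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    if out, err := cmd.CombinedOutput(); err != nil {
        return fmt.Errorf("extract failed: %v\n%s", err, out)
    }
    fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
    return nil
}

func main() {
    _ = extractPreload(
        "preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4",
        "calico-20220221084934-6550",
        "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531")
}
-- /sketch --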
I0221 08:54:39.206012 223679 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 08:54:39.302203 223679 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220221084934-6550 --name calico-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220221084934-6550 --network calico-20220221084934-6550 --ip 192.168.67.2 --volume calico-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 08:54:39.751892 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Running}} I0221 08:54:39.788728 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:54:39.827631 223679 cli_runner.go:133] Run: docker exec calico-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables I0221 08:54:39.899385 223679 oci.go:281] the created container "calico-20220221084934-6550" has a running status. I0221 08:54:39.899415 223679 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa... I0221 08:54:40.325976 223679 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 08:54:40.437286 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:54:40.476120 223679 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 08:54:40.476145 223679 kic_runner.go:114] Args: [docker exec --privileged calico-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 08:54:40.568825 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:54:40.605419 223679 machine.go:88] provisioning docker machine ... 
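SSH key installation above is two steps: place the freshly generated public key at /home/docker/.ssh/authorized_keys inside the container, then chown it to the in-container docker user via a privileged exec. A sketch of those steps; the `docker cp` here is an assumption, since the log only shows a temp-file copy and the chown args:

-- sketch (Go) --
package main

import (
    "fmt"
    "os/exec"
)

// installSSHKey copies the public key into the container and fixes its
// ownership so the in-container "docker" user can authenticate, mirroring
// the kic_runner steps logged above.
func installSSHKey(container, pubKeyPath string) error {
    steps := [][]string{
        {"docker", "cp", pubKeyPath, container + ":/home/docker/.ssh/authorized_keys"},
        {"docker", "exec", "--privileged", container,
            "chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
    }
    for _, s := range steps {
        if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
            return fmt.Errorf("%v: %v\n%s", s, err, out)
        }
    }
    return nil
}

func main() {
    fmt.Println(installSSHKey("calico-20220221084934-6550", "id_rsa.pub"))
}
-- /sketch --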
I0221 08:54:40.605466 223679 ubuntu.go:169] provisioning hostname "calico-20220221084934-6550" I0221 08:54:40.605522 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:40.645726 223679 main.go:130] libmachine: Using SSH client type: native I0221 08:54:40.645994 223679 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49364 } I0221 08:54:40.646023 223679 main.go:130] libmachine: About to run SSH command: sudo hostname calico-20220221084934-6550 && echo "calico-20220221084934-6550" | sudo tee /etc/hostname I0221 08:54:40.780620 223679 main.go:130] libmachine: SSH cmd err, output: : calico-20220221084934-6550 I0221 08:54:40.780691 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:40.814209 223679 main.go:130] libmachine: Using SSH client type: native I0221 08:54:40.814413 223679 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49364 } I0221 08:54:40.814449 223679 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\scalico-20220221084934-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220221084934-6550/g' /etc/hosts; else echo '127.0.1.1 calico-20220221084934-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:54:40.938947 223679 main.go:130] libmachine: SSH cmd err, output: : I0221 08:54:40.938980 223679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:54:40.939035 223679 ubuntu.go:177] setting up certificates I0221 08:54:40.939046 223679 provision.go:83] configureAuth start I0221 08:54:40.939089 223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550 I0221 08:54:40.975796 223679 provision.go:138] copyHostCerts I0221 08:54:40.975850 223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing 
... I0221 08:54:40.975857 223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:54:40.975903 223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:54:40.975970 223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 08:54:40.975988 223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:54:40.976005 223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:54:40.976063 223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 08:54:40.976102 223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:54:40.976121 223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:54:40.976166 223679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.calico-20220221084934-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220221084934-6550] I0221 08:54:41.313676 223679 provision.go:172] copyRemoteCerts I0221 08:54:41.313739 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:54:41.313767 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:41.349452 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:54:41.438412 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:54:41.457832 223679 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes) I0221 08:54:41.476216 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 08:54:41.495583 223679 provision.go:86] duration metric: configureAuth took 556.525196ms I0221 08:54:41.495616 223679 ubuntu.go:193] setting minikube options for container-runtime I0221 08:54:41.495815 223679 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:41.495870 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:41.533059 223679 main.go:130] libmachine: Using SSH client type: native I0221 08:54:41.533198 223679 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49364 } I0221 08:54:41.533213 223679 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 08:54:41.655048 223679 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 08:54:41.655077 223679 ubuntu.go:71] root file system type: overlay I0221 08:54:41.655267 223679 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 08:54:41.655327 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:41.689366 223679 main.go:130] libmachine: Using SSH client type: native I0221 08:54:41.689505 223679 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49364 } I0221 08:54:41.689565 223679 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 08:54:41.822029 223679 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 08:54:41.822112 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:41.859291 223679 main.go:130] libmachine: Using SSH client type: native I0221 08:54:41.859435 223679 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49364 } I0221 08:54:41.859452 223679 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 08:54:42.534877 223679 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 08:54:41.817826590 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 08:54:42.534914 223679 machine.go:91] provisioned docker machine in 1.929466074s I0221 08:54:42.534924 223679 client.go:171] LocalClient.Create took 10.53016081s I0221 08:54:42.534936 223679 start.go:168] duration metric: libmachine.API.Create for "calico-20220221084934-6550" took 10.530218344s I0221 08:54:42.534945 223679 start.go:267] post-start starting for "calico-20220221084934-6550" (driver="docker") I0221 08:54:42.534950 223679 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 08:54:42.535085 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 08:54:42.535124 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:42.570227 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:54:42.659420 223679 ssh_runner.go:195] Run: cat /etc/os-release I0221 08:54:42.662549 223679 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 08:54:42.662589 223679 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 08:54:42.662602 223679 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 08:54:42.662610 223679 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 08:54:42.662627 223679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
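The diff output above is the guard that keeps the docker.service swap idempotent: `diff -u` exits 0 when the freshly rendered docker.service.new matches the live unit, so the mv/daemon-reload/restart block only runs when something actually changed. A sketch that builds the same shell command the log shows:

-- sketch (Go) --
package main

import "fmt"

// updateUnitCmd reproduces the idempotent unit swap logged above: only if
// <unit>.new differs from the live unit does minikube move it into place and
// daemon-reload/enable/restart docker.
func updateUnitCmd(unit string) string {
    return fmt.Sprintf(
        "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
            "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
            "sudo systemctl -f restart docker; }", unit)
}

func main() {
    fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}
-- /sketch --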
I0221 08:54:42.662691 223679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:54:42.662786 223679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:54:42.662899 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:54:42.670331 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:54:42.689477 223679 start.go:270] post-start completed in 154.520884ms I0221 08:54:42.689843 223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550 I0221 08:54:42.730023 223679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json ... I0221 08:54:42.730315 223679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:54:42.730369 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:42.767727 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:54:42.851528 223679 start.go:129] duration metric: createHost completed in 10.849499789s I0221 08:54:42.851567 223679 start.go:80] releasing machines lock for "calico-20220221084934-6550", held for 10.849686754s I0221 08:54:42.851656 223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550 I0221 08:54:42.893166 223679 ssh_runner.go:195] Run: systemctl --version I0221 08:54:42.893224 223679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:54:42.893229 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:42.893280 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:54:42.935097 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:54:42.939437 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:54:43.165553 223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 08:54:43.176428 223679 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:54:43.186305 223679 cruntime.go:272] 
skipping containerd shutdown because we are bound to it I0221 08:54:43.186358 223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 08:54:43.196307 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 08:54:43.209884 223679 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 08:54:43.297602 223679 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 08:54:43.367679 223679 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:54:43.377417 223679 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 08:54:43.457703 223679 ssh_runner.go:195] Run: sudo systemctl start docker I0221 08:54:43.467810 223679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:54:43.509287 223679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:54:43.551952 223679 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 08:54:43.552042 223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:54:43.590101 223679 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts I0221 08:54:43.593455 223679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:54:43.604974 223679 out.go:176] - kubelet.housekeeping-interval=5m I0221 08:54:43.605063 223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:43.605146 223679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:54:43.639090 223679 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:54:43.639119 223679 docker.go:537] Images already preloaded, skipping extraction I0221 08:54:43.639171 223679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:54:43.676921 223679 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:54:43.676951 223679 cache_images.go:84] Images are preloaded, skipping loading I0221 08:54:43.677005 223679 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 08:54:43.775624 223679 cni.go:93] Creating CNI manager for "calico" I0221 08:54:43.775650 223679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 
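"Images already preloaded, skipping extraction" above comes from comparing the daemon's image list against the expected preload set. A sketch of that check, with the expected list abbreviated from the images logged above:

-- sketch (Go) --
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// imagesPreloaded mirrors the check logged above: list what the daemon
// already has via `docker images --format {{.Repository}}:{{.Tag}}` and
// verify every expected image is present, so cache loading can be skipped.
func imagesPreloaded(expected []string) (bool, error) {
    out, err := exec.Command("docker", "images",
        "--format", "{{.Repository}}:{{.Tag}}").Output()
    if err != nil {
        return false, err
    }
    have := map[string]bool{}
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        have[line] = true
    }
    for _, img := range expected {
        if !have[img] {
            return false, nil
        }
    }
    return true, nil
}

func main() {
    ok, err := imagesPreloaded([]string{
        "k8s.gcr.io/kube-apiserver:v1.23.4",
        "k8s.gcr.io/etcd:3.5.1-0",
        "k8s.gcr.io/coredns/coredns:v1.8.6",
        "k8s.gcr.io/pause:3.6",
    })
    fmt.Println(ok, err)
}
-- /sketch --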
I0221 08:54:43.775662 223679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220221084934-6550 NodeName:calico-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 08:54:43.775783 223679 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "calico-20220221084934-6550" kubeletExtraArgs: node-ip: 192.168.67.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.67.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 08:54:43.775860 223679 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf 
--network-plugin=cni --node-ip=192.168.67.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} I0221 08:54:43.775903 223679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 08:54:43.783049 223679 binaries.go:44] Found k8s binaries, skipping transfer I0221 08:54:43.783112 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 08:54:43.790080 223679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes) I0221 08:54:43.803657 223679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 08:54:43.817305 223679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes) I0221 08:54:43.832073 223679 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0221 08:54:43.835308 223679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:54:43.845202 223679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550 for IP: 192.168.67.2 I0221 08:54:43.845320 223679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 08:54:43.845374 223679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 08:54:43.845436 223679 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key I0221 08:54:43.845456 223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt with IP's: [] I0221 08:54:44.006432 223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt ... I0221 08:54:44.006474 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt: {Name:mk855fbba0271a5174ba2c17a62536f5fc002b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:44.006707 223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key ... 
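(Aside: the client.crt/client.key pair generated here is signed by the shared minikubeCA under the profile directory shown above. To inspect it by hand one could run, from the host:

  openssl x509 -noout -subject -issuer -dates \
    -in <profile-dir>/client.crt

where <profile-dir> is shorthand for the .minikube/profiles/calico-20220221084934-6550 path logged above; the apiserver certificate generated next additionally carries the SANs 192.168.67.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1.)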
I0221 08:54:44.006730 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key: {Name:mk6b07f68ad6023650adafd135358280d1825bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:44.006871 223679 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e I0221 08:54:44.006897 223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 08:54:44.294014 223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e ... I0221 08:54:44.294052 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e: {Name:mkb18de625bf9d4b1da4d8c0e20b7c74d4689d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:44.294290 223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e ... I0221 08:54:44.294313 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e: {Name:mk342d0f120f3782db5aaad19a32574ae0c04f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:44.294434 223679 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt I0221 08:54:44.294491 223679 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key I0221 08:54:44.294537 223679 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key I0221 08:54:44.294551 223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt with IP's: [] I0221 08:54:44.518976 223679 crypto.go:156] Writing cert to 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt ... I0221 08:54:44.519036 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt: {Name:mk6f6f43267f4534ff28d48ba090d2600cf0e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:44.519265 223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key ... I0221 08:54:44.519291 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key: {Name:mk80acd65e2e1b5036bf09d5fa5ec12f9e2086fa Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:44.519541 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 08:54:44.519593 223679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 08:54:44.519633 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 08:54:44.519678 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 08:54:44.519730 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 08:54:44.519770 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 08:54:44.519828 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 08:54:44.521210 223679 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 08:54:44.558411 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 08:54:44.579347 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 08:54:44.604843 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0221 08:54:44.627275 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 08:54:44.648374 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 08:54:44.669879 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 08:54:44.689847 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 08:54:44.709519 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 08:54:44.733150 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 08:54:44.756964 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 08:54:44.778521 223679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 08:54:44.793575 223679 ssh_runner.go:195] Run: openssl version I0221 08:54:44.798665 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 08:54:44.808787 223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 08:54:44.812470 223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 08:54:44.812527 223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 08:54:44.817903 223679 ssh_runner.go:195] 
Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 08:54:44.827601 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 08:54:44.865122 223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 08:54:44.891782 223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 08:54:44.891866 223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 08:54:44.899116 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 08:54:44.909368 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 08:54:44.920591 223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 08:54:44.925480 223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 08:54:44.925592 223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 08:54:44.932674 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 08:54:44.947547 223679 kubeadm.go:391] StartCluster: {Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:54:44.947712 223679 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 08:54:44.991618 223679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 08:54:44.998885 223679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 08:54:45.015354 223679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 08:54:45.015414 223679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 08:54:45.028145 223679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 08:54:45.028193 223679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 08:54:45.659427 223679 out.go:203] - Generating certificates and keys ... I0221 08:54:48.200933 223679 out.go:203] - Booting up control plane ... I0221 08:55:02.748988 223679 out.go:203] - Configuring RBAC rules ... I0221 08:55:03.208968 223679 cni.go:93] Creating CNI manager for "calico" I0221 08:55:03.211365 223679 out.go:176] * Configuring Calico (Container Networking Interface) ... I0221 08:55:03.211657 223679 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ... 
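(Aside: once the Calico manifest below is applied, its rollout can be followed with the standard label selector on the Calico DaemonSet, assuming minikube's default context naming:

  kubectl --context calico-20220221084934-6550 -n kube-system \
    get pods -l k8s-app=calico-node -w

This is the same DaemonSet pod the test later waits on as "calico-node-zcdj6".)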
I0221 08:55:03.211681 223679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes) I0221 08:55:03.227608 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0221 08:55:04.757338 223679 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.529692552s) I0221 08:55:04.757387 223679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 08:55:04.757470 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:04.757473 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=calico-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:04.850953 223679 ops.go:34] apiserver oom_adj: -16 I0221 08:55:04.851063 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:05.440068 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:05.940254 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:06.440215 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:06.940222 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:07.440213 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:07.939923 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:08.439546 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:08.940223 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:09.440124 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:09.939702 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:10.439575 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:10.940202 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:11.439703 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:11.939963 223679 
ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:12.439836 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:12.939553 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:13.439654 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:13.497568 223679 kubeadm.go:1020] duration metric: took 8.740153817s to wait for elevateKubeSystemPrivileges. I0221 08:55:13.497601 223679 kubeadm.go:393] StartCluster complete in 28.550066987s I0221 08:55:13.497616 223679 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:13.497683 223679 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:55:13.498747 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:14.022464 223679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220221084934-6550" rescaled to 1 I0221 08:55:14.022509 223679 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:55:14.024435 223679 out.go:176] * Verifying Kubernetes components... I0221 08:55:14.024485 223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:55:14.022561 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 08:55:14.022577 223679 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 08:55:14.024575 223679 addons.go:65] Setting storage-provisioner=true in profile "calico-20220221084934-6550" I0221 08:55:14.022730 223679 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:14.024592 223679 addons.go:65] Setting default-storageclass=true in profile "calico-20220221084934-6550" I0221 08:55:14.024599 223679 addons.go:153] Setting addon storage-provisioner=true in "calico-20220221084934-6550" I0221 08:55:14.024606 223679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220221084934-6550" W0221 08:55:14.024612 223679 addons.go:165] addon storage-provisioner should already be in state true I0221 08:55:14.024642 223679 host.go:66] Checking if "calico-20220221084934-6550" exists ... I0221 08:55:14.024913 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:55:14.025104 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:55:14.038203 223679 node_ready.go:35] waiting up to 5m0s for node "calico-20220221084934-6550" to be "Ready" ... 
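(Aside: the 5m node wait above is roughly equivalent to running, from the host and assuming minikube's default context naming:

  kubectl --context calico-20220221084934-6550 wait \
    --for=condition=Ready node/calico-20220221084934-6550 --timeout=5m

Here the node reports Ready within a few milliseconds on the next line, because the kubelet had already registered as Ready before the check started.)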
I0221 08:55:14.042490 223679 node_ready.go:49] node "calico-20220221084934-6550" has status "Ready":"True" I0221 08:55:14.042526 223679 node_ready.go:38] duration metric: took 4.281504ms waiting for node "calico-20220221084934-6550" to be "Ready" ... I0221 08:55:14.042537 223679 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:55:14.064216 223679 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ... I0221 08:55:14.068536 223679 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 08:55:14.068650 223679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:55:14.068667 223679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 08:55:14.068718 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:55:14.071204 223679 addons.go:153] Setting addon default-storageclass=true in "calico-20220221084934-6550" W0221 08:55:14.071226 223679 addons.go:165] addon default-storageclass should already be in state true I0221 08:55:14.071248 223679 host.go:66] Checking if "calico-20220221084934-6550" exists ... I0221 08:55:14.071675 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:55:14.095438 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 08:55:14.121614 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:55:14.130797 223679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 08:55:14.130824 223679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 08:55:14.130878 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550 I0221 08:55:14.166553 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker} I0221 08:55:14.505375 223679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 08:55:14.506353 223679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:55:16.015822 223679 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.92034198s) I0221 08:55:16.015851 223679 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS I0221 08:55:16.020294 223679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.514878245s) I0221 08:55:16.106779 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:16.116155 223679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.609765842s) I0221 08:55:16.117844 223679 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 08:55:16.117871 223679 addons.go:417] enableAddons completed in 2.095295955s I0221 08:55:18.608129 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:21.084145 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:23.583507 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:26.082513 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:28.584036 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:30.607366 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace 
has status "Ready":"False" I0221 08:55:32.608422 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:34.608830 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:37.082853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:39.082914 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:41.084278 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:43.583801 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:46.104227 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:48.608316 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:51.082452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:53.082812 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:55.604982 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:58.083480 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:00.107900 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:02.108600 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:04.109005 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:06.608183 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:09.083257 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:11.584369 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:13.603328 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:15.607461 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:17.608185 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:20.103368 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:22.106959 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:24.109509 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.606973 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:28.607609 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:31.082276 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:33.107320 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:35.583226 
223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:38.107435 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:40.606736 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:43.082434 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:45.107171 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:47.583447 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:49.608204 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:51.608560 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:54.108380 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:56.583351 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:59.083417 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:01.108902 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:03.608727 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:06.083201 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:08.606947 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:11.085043 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:13.606594 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:16.104269 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:18.582815 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:20.585066 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:23.083375 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:25.108449 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:27.607457 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:29.607786 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:32.085234 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.109374 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:36.583295 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:39.105966 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:41.606692 223679 pod_ready.go:102] pod "calico-node-zcdj6" in 
"kube-system" namespace has status "Ready":"False" I0221 08:57:44.106976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:46.583983 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:49.084072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:51.112230 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:53.606853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:55.607543 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:58.108377 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:00.608452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:03.082697 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:05.107411 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:07.583427 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.086403 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:12.582090 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:14.607319 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.083915 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:19.607890 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:22.082238 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:24.107976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:26.608511 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:29.107566 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:31.108790 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:33.582823 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:35.586175 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:37.607126 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.082258 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:42.108072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:44.607510 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:46.608936 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" 
I0221 08:58:48.609972 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:51.082477 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:53.105968 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:55.582165 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.606112 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:59.608167 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:02.106572 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.107313 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.108123 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.108992 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.582664 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:12.583673 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:14.112706 223679 pod_ready.go:81] duration metric: took 4m0.048450561s waiting for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ... E0221 08:59:14.112734 223679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 08:59:14.112746 223679 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.117793 223679 pod_ready.go:92] pod "etcd-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.117820 223679 pod_ready.go:81] duration metric: took 5.066157ms waiting for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.117832 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.122627 223679 pod_ready.go:92] pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.122647 223679 pod_ready.go:81] duration metric: took 4.807147ms waiting for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.122656 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.127594 223679 pod_ready.go:92] pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.127616 223679 pod_ready.go:81] duration metric: took 4.954276ms waiting for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.127627 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ... 
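(Aside: the calico-node wait above gave up after 4m0s with the pod still Running but not Ready, which usually means its readiness probe (felix/BIRD) kept failing. Assuming minikube's default context naming, the standard next steps to dig in would be:

  kubectl --context calico-20220221084934-6550 -n kube-system describe pod calico-node-zcdj6
  kubectl --context calico-20220221084934-6550 -n kube-system logs calico-node-zcdj6 -c calico-node --tail=100

)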
I0221 08:59:14.480801 223679 pod_ready.go:92] pod "kube-proxy-kwcvx" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.480829 223679 pod_ready.go:81] duration metric: took 353.19554ms waiting for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.480842 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.879906 223679 pod_ready.go:92] pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.879927 223679 pod_ready.go:81] duration metric: took 399.077104ms waiting for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.879937 223679 pod_ready.go:38] duration metric: took 4m0.837387313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:59:14.879961 223679 api_server.go:51] waiting for apiserver process to appear ... I0221 08:59:14.880012 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:14.942433 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:14.942510 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:15.037787 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:15.037848 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:15.134487 223679 logs.go:274] 0 containers: [] W0221 08:59:15.134520 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:15.134573 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:15.229656 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:15.229733 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:15.320906 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:15.320985 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:15.417453 223679 logs.go:274] 0 containers: [] W0221 08:59:15.417481 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:15.417528 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:15.513893 223679 logs.go:274] 2 containers: [528acfa448ce f6cf402c0c9d] I0221 08:59:15.513990 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:15.550415 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:15.550454 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:15.550465 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:15.576242 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:15.576295 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:15.618102 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:15.618136 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:15.656954 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... 
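(Aside: this log-gathering pass mirrors what the logs subcommand does; roughly the same bundle can be produced manually with:

  out/minikube-linux-amd64 logs -p calico-20220221084934-6550

)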
I0221 08:59:15.656987 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:15.722111 223679 logs.go:123] Gathering logs for storage-provisioner [f6cf402c0c9d] ... I0221 08:59:15.722147 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6cf402c0c9d" I0221 08:59:15.808702 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:15.808737 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:15.889269 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:15.889312 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:15.945538 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:15.945571 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:16.147141 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:16.147186 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:16.338070 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:16.338111 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:16.431605 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:16.431645 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:16.530228 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:16.530264 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:19.103148 223679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 08:59:19.129062 223679 api_server.go:71] duration metric: took 4m5.106529752s to wait for apiserver process to appear ... I0221 08:59:19.129100 223679 api_server.go:87] waiting for apiserver healthz status ... 
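(Aside: the healthz poll that follows hits the apiserver directly; the equivalent manual checks would be:

  curl -sk https://192.168.67.2:8443/healthz
  kubectl --context calico-20220221084934-6550 get --raw /healthz

Both should return "ok" once the control plane is healthy, as the 200 response further below confirms.)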
I0221 08:59:19.129165 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:19.224393 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:19.224460 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:19.319828 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:19.319900 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:19.418463 223679 logs.go:274] 0 containers: [] W0221 08:59:19.418495 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:19.418541 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:19.516431 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:19.516522 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:19.607457 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:19.607543 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:19.644308 223679 logs.go:274] 0 containers: [] W0221 08:59:19.644330 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:19.644368 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:19.677987 223679 logs.go:274] 1 containers: [528acfa448ce] I0221 08:59:19.678065 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:19.711573 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:19.711614 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:19.711634 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:19.739316 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:19.739352 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:19.829642 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:19.829686 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:19.928327 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:19.928367 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:20.030039 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:20.030084 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:20.115493 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:20.115539 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:20.289828 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:20.289874 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:20.351337 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:20.351388 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:20.480018 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:20.480056 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:20.594320 223679 logs.go:123] Gathering logs for container status ... 
I0221 08:59:20.594358 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:20.641023 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:20.641062 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:23.238237 223679 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... I0221 08:59:23.244347 223679 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok I0221 08:59:23.246494 223679 api_server.go:140] control plane version: v1.23.4 I0221 08:59:23.246519 223679 api_server.go:130] duration metric: took 4.1174116s to wait for apiserver health ... I0221 08:59:23.246529 223679 system_pods.go:43] waiting for kube-system pods to appear ... I0221 08:59:23.246581 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:23.331088 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:23.331164 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:23.425220 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:23.425297 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:23.510198 223679 logs.go:274] 0 containers: [] W0221 08:59:23.510230 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:23.510284 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:23.548794 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:23.548859 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:23.642803 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:23.642891 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:23.735232 223679 logs.go:274] 0 containers: [] W0221 08:59:23.735263 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:23.735316 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:23.820175 223679 logs.go:274] 1 containers: [528acfa448ce] I0221 08:59:23.820245 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:23.911162 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:23.911205 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:23.911218 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:24.010277 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:24.010307 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:24.188331 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:24.188378 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:24.235517 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:24.235564 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:24.433778 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... 
I0221 08:59:24.433815 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:24.542462 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:24.542562 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:24.683898 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:24.683938 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:24.747804 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:24.747846 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:24.839623 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:24.839664 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:24.933214 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:24.933249 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:24.970081 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:24.970115 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:27.559651 223679 system_pods.go:59] 9 kube-system pods found I0221 08:59:27.559689 223679 system_pods.go:61] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.559697 223679 system_pods.go:61] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.559703 223679 system_pods.go:61] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.559708 223679 system_pods.go:61] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.559713 223679 system_pods.go:61] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.559717 223679 system_pods.go:61] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.559722 223679 system_pods.go:61] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.559726 223679 system_pods.go:61] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.559734 223679 system_pods.go:61] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.559742 223679 system_pods.go:74] duration metric: took 4.313209437s to wait for pod list to return data ... I0221 08:59:27.559749 223679 default_sa.go:34] waiting for default service account to be created ... 
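The system_pods.go entries that follow poll the Kubernetes API for the kube-system pod list and report each pod's phase and readiness, which is where the "Pending / Ready:ContainersNotReady (containers with unready status: [coredns])" annotations come from. Below is a minimal client-go equivalent of one such poll; it is a sketch assuming a reachable kubeconfig at the path the log shows inside the node, not minikube's actual helper.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is illustrative; inside the minikube node the file is
	// /var/lib/minikube/kubeconfig, as the kubectl invocations above show.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// A pod counts as ready only when its PodReady condition is True;
		// "Running" pods with unready containers still fail this check,
		// exactly like calico-node in the log above.
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}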
I0221 08:59:27.562671 223679 default_sa.go:45] found service account: "default" I0221 08:59:27.562697 223679 default_sa.go:55] duration metric: took 2.939018ms for default service account to be created ... I0221 08:59:27.562709 223679 system_pods.go:116] waiting for k8s-apps to be running ... I0221 08:59:27.606750 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:27.606791 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.606820 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.606832 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.606849 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.606856 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.606863 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.606870 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.606880 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.606889 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.606913 223679 retry.go:31] will retry after 263.082536ms: missing components: kube-dns I0221 08:59:27.875522 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:27.875558 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.875569 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.875575 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.875581 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.875586 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.875590 223679 
system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.875593 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.875598 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.875603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.875619 223679 retry.go:31] will retry after 381.329545ms: missing components: kube-dns I0221 08:59:28.262703 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:28.262737 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:28.262745 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:28.262752 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:28.262757 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:28.262764 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:28.262770 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:28.262776 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:28.262782 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:28.262789 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:28.262812 223679 retry.go:31] will retry after 422.765636ms: missing components: kube-dns I0221 08:59:28.708387 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:28.708425 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:28.708467 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:28.708488 223679 system_pods.go:89] "coredns-64897985d-r75jc" 
[8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:28.708506 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:28.708519 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:28.708531 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:28.708537 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:28.708544 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:28.708559 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:28.708575 223679 retry.go:31] will retry after 473.074753ms: missing components: kube-dns I0221 08:59:29.187326 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:29.187359 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:29.187367 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:29.187374 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:29.187379 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:29.187384 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:29.187388 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:29.187392 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:29.187396 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:29.187401 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:29.187414 223679 retry.go:31] will retry after 587.352751ms: missing components: kube-dns I0221 08:59:29.807999 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:29.808041 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: 
[calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:29.808052 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:29.808062 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:29.808069 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:29.808077 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:29.808087 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:29.808093 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:29.808103 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:29.808113 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:29.808133 223679 retry.go:31] will retry after 834.206799ms: missing components: kube-dns I0221 08:59:30.649684 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:30.649731 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:30.649746 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:30.649756 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:30.649766 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:30.649778 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:30.649792 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:30.649806 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:30.649817 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:30.649831 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / 
ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:30.649852 223679 retry.go:31] will retry after 746.553905ms: missing components: kube-dns I0221 08:59:31.403363 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:31.403414 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:31.403426 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:31.403438 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:31.403446 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:31.403455 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:31.403466 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:31.403474 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:31.403488 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:31.403498 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:31.403522 223679 retry.go:31] will retry after 987.362415ms: missing components: kube-dns I0221 08:59:32.397015 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:32.397055 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:32.397064 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:32.397075 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:32.397083 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:32.397090 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:32.397103 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] 
Running I0221 08:59:32.397110 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:32.397121 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:32.397132 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:32.397148 223679 retry.go:31] will retry after 1.189835008s: missing components: kube-dns I0221 08:59:33.607429 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:33.607467 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:33.607475 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:33.607484 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:33.607493 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:33.607500 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:33.607507 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:33.607531 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:33.607541 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:33.607550 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:33.607570 223679 retry.go:31] will retry after 1.677229867s: missing components: kube-dns I0221 08:59:35.291721 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:35.291757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:35.291767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:35.291776 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / 
ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:35.291783 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:35.291792 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:35.291798 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:35.291809 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:35.291815 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:35.291826 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:35.291840 223679 retry.go:31] will retry after 2.346016261s: missing components: kube-dns I0221 08:59:37.644075 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:37.644109 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:37.644117 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:37.644124 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:37.644131 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:37.644136 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:37.644140 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:37.644144 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:37.644147 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:37.644153 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:37.644169 223679 retry.go:31] will retry after 3.36678925s: missing components: kube-dns I0221 08:59:41.020218 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:41.020262 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 
08:59:41.020274 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:41.020284 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:41.020290 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:41.020296 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:41.020301 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:41.020307 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:41.020324 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:41.020332 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:41.020346 223679 retry.go:31] will retry after 3.11822781s: missing components: kube-dns I0221 08:59:44.146493 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:44.146526 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:44.146534 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:44.146544 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:44.146552 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:44.146563 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:44.146570 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:44.146582 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:44.146593 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:44.146603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:44.146623 223679 retry.go:31] 
will retry after 4.276119362s: missing components: kube-dns I0221 08:59:48.430784 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:48.430822 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:48.430855 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:48.430867 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:48.430880 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:48.430889 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:48.430901 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:48.430911 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:48.430921 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:48.430931 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:48.431005 223679 retry.go:31] will retry after 5.167232101s: missing components: kube-dns I0221 08:59:53.607863 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:53.607910 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:53.607925 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:53.607936 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:53.607950 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:53.607957 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:53.607965 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:53.607971 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 
08:59:53.607979 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:53.607991 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:53.608009 223679 retry.go:31] will retry after 6.994901864s: missing components: kube-dns I0221 09:00:00.608725 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:00.608757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:00.608767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:00.608774 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:00.608778 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:00.608783 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:00.608788 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:00.608791 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:00.608796 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:00.608801 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:00.608818 223679 retry.go:31] will retry after 7.91826225s: missing components: kube-dns I0221 09:00:08.534545 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:08.534589 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:08.534602 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:08.534613 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:08.534621 223679 system_pods.go:89] 
"etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:08.534630 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:08.534642 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:08.534654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:08.534665 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:08.534678 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:08.534700 223679 retry.go:31] will retry after 9.953714808s: missing components: kube-dns I0221 09:00:18.494832 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:18.494873 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:18.494884 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:18.494893 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:18.494898 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:18.494903 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:18.494909 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:18.494918 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:18.494925 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:18.494935 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:18.494956 223679 retry.go:31] will retry after 15.120437328s: missing components: kube-dns I0221 09:00:33.622907 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:33.622950 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:33.622961 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / 
Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:33.622970 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:33.622977 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:33.622983 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:33.622989 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:33.623036 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:33.623050 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:33.623058 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:33.623079 223679 retry.go:31] will retry after 14.90607158s: missing components: kube-dns I0221 09:00:48.536869 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:48.536919 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:48.536931 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:48.536941 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:48.536949 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:48.536955 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:48.536959 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:48.536964 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:48.536968 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:48.536982 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running I0221 09:00:48.536998 223679 retry.go:31] will retry after 18.465989061s: missing components: kube-dns I0221 09:01:07.010825 223679 system_pods.go:86] 9 kube-system pods found I0221 09:01:07.010865 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / 
Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:01:07.010877 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:01:07.010887 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:01:07.010895 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:01:07.010902 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:01:07.010908 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:01:07.010925 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:01:07.010931 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:01:07.010939 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running I0221 09:01:07.010960 223679 retry.go:31] will retry after 25.219510332s: missing components: kube-dns I0221 09:01:32.236004 223679 system_pods.go:86] 9 kube-system pods found I0221 09:01:32.236044 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:01:32.236056 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:01:32.236064 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:01:32.236072 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:01:32.236078 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:01:32.236084 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:01:32.236091 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:01:32.236097 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:01:32.236107 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:01:32.236125 223679 
retry.go:31] will retry after 35.078569648s: missing components: kube-dns I0221 09:02:07.320903 223679 system_pods.go:86] 9 kube-system pods found I0221 09:02:07.320944 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:02:07.320955 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:02:07.320961 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:02:07.320967 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:02:07.320973 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:02:07.320977 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:02:07.320981 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:02:07.320985 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:02:07.320990 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:02:07.321002 223679 retry.go:31] will retry after 50.027701973s: missing components: kube-dns I0221 09:02:57.356331 223679 system_pods.go:86] 9 kube-system pods found I0221 09:02:57.356379 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:02:57.356394 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:02:57.356411 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:02:57.356420 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:02:57.356428 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:02:57.356435 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:02:57.356448 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running 
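The retry.go:31 "will retry after ..." lines throughout this wait loop trace an exponential backoff with jitter: the delays grow from roughly 263ms to about 50s until the overall 5m0s wait budget for the node is exhausted. A sketch of that retry shape follows; the growth factor and jitter fraction are illustrative, not minikube's exact constants.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or the deadline passes, sleeping an
// exponentially growing, jittered interval between attempts.
func retry(fn func() error, deadline time.Duration) error {
	start := time.Now()
	wait := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Add up to 25% random jitter so parallel pollers do not hit the
		// apiserver in lockstep, then grow the base interval by ~1.4x.
		sleep := time.Duration(float64(wait) * (1 + 0.25*rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait = time.Duration(float64(wait) * 1.4)
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, 5*time.Minute)
	fmt.Println("result:", err)
}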
I0221 09:02:57.356454 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:02:57.356467 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:02:57.356486 223679 retry.go:31] will retry after 47.463338706s: missing components: kube-dns
I0221 09:03:44.827562 223679 system_pods.go:86] 9 kube-system pods found
I0221 09:03:44.827595 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:03:44.827608 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:03:44.827618 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:03:44.827630 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:03:44.827637 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:03:44.827644 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:03:44.827654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:03:44.827659 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:03:44.827674 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:03:44.830160 223679 out.go:176]
W0221 09:03:44.830324 223679 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
W0221 09:03:44.830341 223679 out.go:241] *
*
W0221 09:03:44.831471 223679 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0221 09:03:44.832903 223679 out.go:176]
** /stderr **
net_test.go:101: failed start: exit status 80
=== CONT TestNetworkPlugins/group/calico
net_test.go:198: "calico" test finished in 14m10.880041084s, failed=true
net_test.go:199: *** TestNetworkPlugins/group/calico FAILED at 2022-02-21 09:03:44.881321403 +0000 UTC m=+2317.643640979
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/calico]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect calico-20220221084934-6550
helpers_test.go:236: (dbg) docker inspect calico-20220221084934-6550:
-- stdout --
[ { "Id": "7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c", "Created": "2022-02-21T08:54:39.336010404Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 224777, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:54:39.741937439Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/resolv.conf", "HostnamePath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/hostname", "HostsPath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/hosts", "LogPath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c-json.log", "Name": "/calico-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "calico-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "calico-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "",
"ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e626
17a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8/merged", "UpperDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8/diff", "WorkDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "calico-20220221084934-6550", "Source": "/var/lib/docker/volumes/calico-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "calico-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "calico-20220221084934-6550", "name.minikube.sigs.k8s.io": "calico-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "b3f6b92c299fab2b0618d523c664134a2b3ea294194e4ae464a452f87d8939d2", "HairpinMode": false, "LinkLocalIPv6Address": "", 
"LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49364" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49363" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49360" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49362" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49361" } ] }, "SandboxKey": "/var/run/docker/netns/b3f6b92c299f", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "calico-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.67.2" }, "Links": null, "Aliases": [ "7ff1dcdb7d38", "calico-20220221084934-6550" ], "NetworkID": "259ea390e5594c5573e56c602cbdaf2a91d5b217fce89343d624015685255bcb", "EndpointID": "8d63df32eefc92663e22f4efb2bd16fbb816ecbe394d0b6328ad38e288478661", "Gateway": "192.168.67.1", "IPAddress": "192.168.67.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:43:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p calico-20220221084934-6550 -n calico-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/calico FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/calico]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p calico-20220221084934-6550 logs -n 25 === CONT TestNetworkPlugins/group/kindnet/NetCatPod helpers_test.go:343: "netcat-668db85669-lcmt9" [0fd0efca-25d3-42b8-b210-f9f1dd5821bd] Running === CONT TestNetworkPlugins/group/calico helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p calico-20220221084934-6550 logs -n 25: (1.835390614s) helpers_test.go:253: TestNetworkPlugins/group/calico logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:25 UTC | Mon, 21 Feb 2022 08:53:26 UTC | | | --alsologtostderr -v=5 | | | | | | | unpause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:27 UTC | Mon, 21 Feb 2022 08:53:28 UTC | | | --alsologtostderr -v=5 | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p 
pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 
09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:03:32 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:03:32.117451 442801 out.go:297] Setting OutFile to fd 1 ... I0221 09:03:32.117835 442801 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:32.117851 442801 out.go:310] Setting ErrFile to fd 2... I0221 09:03:32.117857 442801 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:32.118132 442801 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:03:32.118890 442801 out.go:304] Setting JSON to false I0221 09:03:32.120554 442801 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2766,"bootTime":1645431446,"procs":583,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:03:32.120646 442801 start.go:122] virtualization: kvm guest I0221 09:03:32.123238 442801 out.go:176] * [enable-default-cni-20220221084933-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:03:32.124663 442801 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:03:32.123381 442801 notify.go:193] Checking for updates... 
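
[Editor's note] The `Last Start` header above documents the klog-style line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). For anyone post-processing these logs, here is a minimal, illustrative Go parser for that format; the regexp and field names are my own, not minikube code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the documented format:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

    func main() {
        line := `I0221 09:03:32.117451 442801 out.go:297] Setting OutFile to fd 1 ...`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s month=%s day=%s time=%s tid=%s file=%s:%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
    }
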
I0221 09:03:32.126005 442801 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:03:32.127444 442801 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:03:32.128833 442801 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:03:32.130126 442801 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:03:32.130603 442801 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:32.130689 442801 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:32.130768 442801 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:32.130810 442801 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:03:32.183866 442801 docker.go:132] docker version: linux-20.10.12 I0221 09:03:32.184022 442801 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:32.308357 442801 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:32.224294462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:03:32.308480 442801 docker.go:237] overlay module found I0221 09:03:32.310829 442801 out.go:176] * Using the docker driver based on user configuration I0221 09:03:32.310861 442801 start.go:281] selected driver: docker I0221 09:03:32.310868 442801 start.go:798] validating driver "docker" against I0221 09:03:32.310888 442801 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:03:32.310939 442801 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:03:32.310966 442801 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:03:32.312796 442801 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:03:32.313594 442801 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:32.439745 442801 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:32.355381059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:03:32.439886 442801 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:03:32.440079 442801 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m E0221 09:03:32.440098 442801 start_flags.go:440] Found deprecated --enable-default-cni flag, setting --cni=bridge I0221 09:03:32.440112 442801 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:03:32.440137 442801 cni.go:93] Creating CNI manager for "bridge" I0221 09:03:32.440148 442801 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni I0221 09:03:32.440157 442801 start_flags.go:302] config: {Name:enable-default-cni-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:enable-default-cni-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:03:32.442216 442801 out.go:176] * Starting control plane node enable-default-cni-20220221084933-6550 in cluster enable-default-cni-20220221084933-6550 I0221 09:03:32.442259 442801 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:03:32.443599 442801 out.go:176] * Pulling base image ... 
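
[Editor's note] The start_flags lines above show the deprecated --enable-default-cni flag being rewritten to --cni=bridge before the cluster config is generated. A small sketch of that translate-and-warn pattern; the function and its table are illustrative, not minikube's actual implementation:

    package main

    import "fmt"

    // translateDeprecated maps a retired flag to its replacement.
    // Only --enable-default-cni, the flag seen in the log above, is listed here.
    func translateDeprecated(name, value string) (string, string, bool) {
        if name == "enable-default-cni" && value == "true" {
            return "cni", "bridge", true
        }
        return name, value, false
    }

    func main() {
        name, value, deprecated := translateDeprecated("enable-default-cni", "true")
        if deprecated {
            fmt.Printf("Found deprecated flag, setting --%s=%s\n", name, value)
        }
    }
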
I0221 09:03:32.443647 442801 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:03:32.443685 442801 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:03:32.443699 442801 cache.go:57] Caching tarball of preloaded images I0221 09:03:32.443721 442801 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:03:32.444172 442801 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:03:32.444195 442801 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:03:32.444392 442801 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/config.json ... I0221 09:03:32.444424 442801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/config.json: {Name:mkf0bedf552068954fb3058e8f1835930a49f413 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:32.506427 442801 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:03:32.506466 442801 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:03:32.506482 442801 cache.go:208] Successfully downloaded all kic artifacts I0221 09:03:32.506546 442801 start.go:313] acquiring machines lock for enable-default-cni-20220221084933-6550: {Name:mkbc0432b219bda8857fd7f89775f7bbf9deb037 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:03:32.506717 442801 start.go:317] acquired machines lock for "enable-default-cni-20220221084933-6550" in 142.562µs I0221 09:03:32.506758 442801 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:enable-default-cni-20220221084933-6550 Namespace:default APIServerName:minikubeCA 
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:03:32.506882 442801 start.go:126] createHost starting for "" (driver="docker") I0221 09:03:31.857452 421870 node_ready.go:49] node "kindnet-20220221084934-6550" has status "Ready":"True" I0221 09:03:31.857489 421870 node_ready.go:38] duration metric: took 7.507952196s waiting for node "kindnet-20220221084934-6550" to be "Ready" ... I0221 09:03:31.857501 421870 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:03:31.869509 421870 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-svjnh" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.384289 421870 pod_ready.go:92] pod "coredns-64897985d-svjnh" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.384318 421870 pod_ready.go:81] duration metric: took 1.51477231s waiting for pod "coredns-64897985d-svjnh" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.384354 421870 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.388240 421870 pod_ready.go:92] pod "etcd-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.388260 421870 pod_ready.go:81] duration metric: took 3.893952ms waiting for pod "etcd-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.388270 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.391903 421870 pod_ready.go:92] pod "kube-apiserver-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.391931 421870 pod_ready.go:81] duration metric: took 3.653574ms waiting for pod "kube-apiserver-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.391943 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
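
[Editor's note] The pod_ready lines above poll each system-critical pod until its Ready condition reports True, with a 5m0s cap. A minimal client-go sketch of the same per-pod wait, assuming a reachable kubeconfig; the polling interval and helper are my own choices:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll one pod until Ready, mirroring the per-pod waits above.
        err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-64897985d-svjnh", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API errors as transient and keep polling
            }
            return podReady(pod), nil
        })
        fmt.Println("ready:", err == nil)
    }
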
I0221 09:03:33.396201 421870 pod_ready.go:92] pod "kube-controller-manager-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.396225 421870 pod_ready.go:81] duration metric: took 4.273596ms waiting for pod "kube-controller-manager-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.396238 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-hvpn5" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.457452 421870 pod_ready.go:92] pod "kube-proxy-hvpn5" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.457474 421870 pod_ready.go:81] duration metric: took 61.229097ms waiting for pod "kube-proxy-hvpn5" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.457482 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.858614 421870 pod_ready.go:92] pod "kube-scheduler-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.858644 421870 pod_ready.go:81] duration metric: took 401.155454ms waiting for pod "kube-scheduler-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.858660 421870 pod_ready.go:38] duration metric: took 2.001143433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:03:33.858686 421870 api_server.go:51] waiting for apiserver process to appear ... I0221 09:03:33.858736 421870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:03:33.886848 421870 api_server.go:71] duration metric: took 9.626804383s to wait for apiserver process to appear ... I0221 09:03:33.886874 421870 api_server.go:87] waiting for apiserver healthz status ... I0221 09:03:33.886883 421870 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:03:33.892372 421870 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 09:03:33.893460 421870 api_server.go:140] control plane version: v1.23.4 I0221 09:03:33.893483 421870 api_server.go:130] duration metric: took 6.603399ms to wait for apiserver health ... I0221 09:03:33.893493 421870 system_pods.go:43] waiting for kube-system pods to appear ... 
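
[Editor's note] The api_server lines above probe https://192.168.49.2:8443/healthz until it answers 200. A sketch of such a probe with a TLS-verification-skipping client; the address is the one in the log, everything else is illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is signed by minikubeCA, which this sketch
            // does not load, so skip verification for the health probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz returned 200: ok")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for healthz")
    }
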
I0221 09:03:34.060826 421870 system_pods.go:59] 8 kube-system pods found I0221 09:03:34.060885 421870 system_pods.go:61] "coredns-64897985d-svjnh" [cd666a7b-1888-4f96-8615-0a625ca7c35a] Running I0221 09:03:34.060894 421870 system_pods.go:61] "etcd-kindnet-20220221084934-6550" [0a0638f5-5420-442a-bb3e-e9b3d10b1ca9] Running I0221 09:03:34.060900 421870 system_pods.go:61] "kindnet-b7vpv" [70703c09-41bc-4c02-9ccf-df45333fbc70] Running I0221 09:03:34.060906 421870 system_pods.go:61] "kube-apiserver-kindnet-20220221084934-6550" [6423a441-9bd2-4e30-a8c1-cd811fe6d38d] Running I0221 09:03:34.060912 421870 system_pods.go:61] "kube-controller-manager-kindnet-20220221084934-6550" [531d4d33-73de-4bcb-a2a5-9c884784ee41] Running I0221 09:03:34.060919 421870 system_pods.go:61] "kube-proxy-hvpn5" [eac36e6a-fd59-49e4-a536-c2aa610984ef] Running I0221 09:03:34.060938 421870 system_pods.go:61] "kube-scheduler-kindnet-20220221084934-6550" [d6e5d38f-b3a5-4b88-baf3-99269615bd6b] Running I0221 09:03:34.060944 421870 system_pods.go:61] "storage-provisioner" [84ae4f8f-baa9-4b02-a1f6-5d9026e71769] Running I0221 09:03:34.060950 421870 system_pods.go:74] duration metric: took 167.447613ms to wait for pod list to return data ... I0221 09:03:34.060958 421870 default_sa.go:34] waiting for default service account to be created ... I0221 09:03:34.323808 421870 default_sa.go:45] found service account: "default" I0221 09:03:34.323844 421870 default_sa.go:55] duration metric: took 262.878661ms for default service account to be created ... I0221 09:03:34.323854 421870 system_pods.go:116] waiting for k8s-apps to be running ... I0221 09:03:34.547698 421870 system_pods.go:86] 8 kube-system pods found I0221 09:03:34.547726 421870 system_pods.go:89] "coredns-64897985d-svjnh" [cd666a7b-1888-4f96-8615-0a625ca7c35a] Running I0221 09:03:34.547732 421870 system_pods.go:89] "etcd-kindnet-20220221084934-6550" [0a0638f5-5420-442a-bb3e-e9b3d10b1ca9] Running I0221 09:03:34.547736 421870 system_pods.go:89] "kindnet-b7vpv" [70703c09-41bc-4c02-9ccf-df45333fbc70] Running I0221 09:03:34.547743 421870 system_pods.go:89] "kube-apiserver-kindnet-20220221084934-6550" [6423a441-9bd2-4e30-a8c1-cd811fe6d38d] Running I0221 09:03:34.547751 421870 system_pods.go:89] "kube-controller-manager-kindnet-20220221084934-6550" [531d4d33-73de-4bcb-a2a5-9c884784ee41] Running I0221 09:03:34.547757 421870 system_pods.go:89] "kube-proxy-hvpn5" [eac36e6a-fd59-49e4-a536-c2aa610984ef] Running I0221 09:03:34.547763 421870 system_pods.go:89] "kube-scheduler-kindnet-20220221084934-6550" [d6e5d38f-b3a5-4b88-baf3-99269615bd6b] Running I0221 09:03:34.547774 421870 system_pods.go:89] "storage-provisioner" [84ae4f8f-baa9-4b02-a1f6-5d9026e71769] Running I0221 09:03:34.547786 421870 system_pods.go:126] duration metric: took 223.925846ms to wait for k8s-apps to be running ... I0221 09:03:34.547799 421870 system_svc.go:44] waiting for kubelet service to be running .... I0221 09:03:34.547838 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:03:34.559249 421870 system_svc.go:56] duration metric: took 11.445985ms WaitForService to wait for kubelet. I0221 09:03:34.559274 421870 kubeadm.go:548] duration metric: took 10.299235376s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 09:03:34.559291 421870 node_conditions.go:102] verifying NodePressure condition ... 
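
[Editor's note] The system_svc check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats exit status 0 as "running". The same probe sketched locally with os/exec, without the SSH layer:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // isActive reports whether a systemd unit is active, relying on
    // systemctl's exit status exactly as the check above does.
    func isActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", isActive("kubelet"))
    }
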
I0221 09:03:34.978164 421870 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:03:34.978194 421870 node_conditions.go:123] node cpu capacity is 8 I0221 09:03:34.978207 421870 node_conditions.go:105] duration metric: took 418.912308ms to run NodePressure ... I0221 09:03:34.978216 421870 start.go:213] waiting for startup goroutines ... I0221 09:03:35.015542 421870 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0) I0221 09:03:35.019140 421870 out.go:176] * Done! kubectl is now configured to use "kindnet-20220221084934-6550" cluster and "default" namespace by default I0221 09:03:32.509100 442801 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:03:32.509428 442801 start.go:160] libmachine.API.Create for "enable-default-cni-20220221084933-6550" (driver="docker") I0221 09:03:32.509468 442801 client.go:168] LocalClient.Create starting I0221 09:03:32.509561 442801 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:03:32.509601 442801 main.go:130] libmachine: Decoding PEM data... I0221 09:03:32.509626 442801 main.go:130] libmachine: Parsing certificate... I0221 09:03:32.509694 442801 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:03:32.509715 442801 main.go:130] libmachine: Decoding PEM data... I0221 09:03:32.509741 442801 main.go:130] libmachine: Parsing certificate... I0221 09:03:32.510145 442801 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:03:32.553881 442801 cli_runner.go:180] docker network inspect enable-default-cni-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:03:32.553962 442801 network_create.go:254] running [docker network inspect enable-default-cni-20220221084933-6550] to gather additional debugging logs... 
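
[Editor's note] The cli_runner/network_create lines around this point run `docker network inspect` and treat exit status 1 with "No such network" on stderr as "the network must be created". A sketch of that probe; the error-string match mirrors the stderr shown below but the helper itself is illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists probes a docker network the way the log does: a zero
    // exit from `docker network inspect` means it is already there.
    func networkExists(name string) (bool, error) {
        cmd := exec.Command("docker", "network", "inspect", name)
        var stderr bytes.Buffer
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            if strings.Contains(stderr.String(), "No such network") {
                return false, nil
            }
            return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, stderr.String())
        }
        return true, nil
    }

    func main() {
        ok, err := networkExists("enable-default-cni-20220221084933-6550")
        fmt.Println(ok, err)
    }
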
I0221 09:03:32.553988 442801 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220221084933-6550 W0221 09:03:32.600997 442801 cli_runner.go:180] docker network inspect enable-default-cni-20220221084933-6550 returned with exit code 1 I0221 09:03:32.601052 442801 network_create.go:257] error running [docker network inspect enable-default-cni-20220221084933-6550]: docker network inspect enable-default-cni-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: enable-default-cni-20220221084933-6550 I0221 09:03:32.601067 442801 network_create.go:259] output of [docker network inspect enable-default-cni-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: enable-default-cni-20220221084933-6550 ** /stderr ** I0221 09:03:32.601145 442801 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:03:32.649708 442801 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5d96ab4d6b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:0b:cb:43}} I0221 09:03:32.651477 442801 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00060ec80] misses:0} I0221 09:03:32.651529 442801 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:03:32.651564 442801 network_create.go:106] attempt to create docker network enable-default-cni-20220221084933-6550 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ... 
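
[Editor's note] network.go above skips 192.168.49.0/24 because an existing bridge already owns it, then reserves 192.168.58.0/24. A sketch that walks candidate /24s in the same 9-step sequence (49, 58, 67, ...) and rejects any that overlap a local interface address; the scan details are assumptions, not minikube's exact algorithm:

    package main

    import (
        "fmt"
        "net"
    )

    // freePrivateSubnet returns the first 192.168.x.0/24 that no local
    // interface address falls inside.
    func freePrivateSubnet() (*net.IPNet, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return nil, err
        }
        for third := 49; third <= 254; third += 9 {
            _, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            taken := false
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
                    taken = true
                    break
                }
            }
            if !taken {
                return candidate, nil
            }
        }
        return nil, fmt.Errorf("no free subnet found")
    }

    func main() {
        subnet, err := freePrivateSubnet()
        fmt.Println(subnet, err)
    }
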
I0221 09:03:32.651625 442801 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220221084933-6550 I0221 09:03:32.737383 442801 network_create.go:90] docker network enable-default-cni-20220221084933-6550 192.168.58.0/24 created I0221 09:03:32.737424 442801 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20220221084933-6550" container I0221 09:03:32.737487 442801 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:03:32.783684 442801 cli_runner.go:133] Run: docker volume create enable-default-cni-20220221084933-6550 --label name.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:03:32.821012 442801 oci.go:102] Successfully created a docker volume enable-default-cni-20220221084933-6550 I0221 09:03:32.821103 442801 cli_runner.go:133] Run: docker run --rm --name enable-default-cni-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --entrypoint /usr/bin/test -v enable-default-cni-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:03:33.446837 442801 oci.go:106] Successfully prepared a docker volume enable-default-cni-20220221084933-6550 I0221 09:03:33.446882 442801 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:03:33.446898 442801 kic.go:179] Starting extracting preloaded images to volume ... I0221 09:03:33.446952 442801 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:03:39.157671 442801 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.710682586s) I0221 09:03:39.157708 442801 kic.go:188] duration metric: took 5.710806 seconds to extract preloaded images to volume W0221 09:03:39.157755 442801 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:03:39.157770 442801 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
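
[Editor's note] The kic extraction step above runs a throwaway container with /usr/bin/tar as its entrypoint, bind-mounting the preload tarball read-only and the named volume as the extraction target. Composing the same docker run invocation from Go might look like this; the paths are shortened placeholders and the image tag is the log's, minus its digest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Illustrative path; the real tarball lives under .minikube/cache as shown above.
        tarball := "/path/to/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4"
        volume := "enable-default-cni-20220221084933-6550"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"

        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
            "-v", volume+":/extractDir",        // named volume receives the images
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
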
I0221 09:03:39.157823 442801 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:03:39.287910 442801 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220221084933-6550 --name enable-default-cni-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --network enable-default-cni-20220221084933-6550 --ip 192.168.58.2 --volume enable-default-cni-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:03:39.785302 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Running}} I0221 09:03:39.825814 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:03:39.866107 442801 cli_runner.go:133] Run: docker exec enable-default-cni-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:03:39.933887 442801 oci.go:281] the created container "enable-default-cni-20220221084933-6550" has a running status. I0221 09:03:39.933924 442801 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa... I0221 09:03:40.203939 442801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:03:40.305594 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:03:40.345149 442801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:03:40.345176 442801 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:03:40.447477 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:03:40.486059 442801 machine.go:88] provisioning docker machine ... 
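
[Editor's note] kic.go above creates an id_rsa for the node and copies id_rsa.pub into /home/docker/.ssh/authorized_keys. A minimal sketch of generating such a key pair with golang.org/x/crypto/ssh; the file names follow the log, the code itself is illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private key, PEM-encoded, as machines/<name>/id_rsa.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            panic(err)
        }
        // Public key in authorized_keys format, as id_rsa.pub.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
    }
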
I0221 09:03:40.486095 442801 ubuntu.go:169] provisioning hostname "enable-default-cni-20220221084933-6550" I0221 09:03:40.486163 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:40.528341 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:40.528564 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:40.528587 442801 main.go:130] libmachine: About to run SSH command: sudo hostname enable-default-cni-20220221084933-6550 && echo "enable-default-cni-20220221084933-6550" | sudo tee /etc/hostname I0221 09:03:40.672510 442801 main.go:130] libmachine: SSH cmd err, output: : enable-default-cni-20220221084933-6550 I0221 09:03:40.672575 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:40.714009 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:40.714173 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:40.714203 442801 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\senable-default-cni-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 enable-default-cni-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:03:40.839246 442801 main.go:130] libmachine: SSH cmd err, output: : I0221 09:03:40.839282 442801 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:03:40.839317 442801 ubuntu.go:177] setting up certificates I0221 09:03:40.839328 442801 provision.go:83] configureAuth start I0221 09:03:40.839376 442801 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220221084933-6550 I0221 09:03:40.878295 442801 provision.go:138] copyHostCerts I0221 09:03:40.878356 442801 exec_runner.go:144] found 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:03:40.878363 442801 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:03:40.878423 442801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:03:40.878507 442801 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:03:40.878522 442801 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:03:40.878544 442801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:03:40.878603 442801 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:03:40.878613 442801 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:03:40.878632 442801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:03:40.878693 442801 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20220221084933-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20220221084933-6550] I0221 09:03:41.118770 442801 provision.go:172] copyRemoteCerts I0221 09:03:41.118849 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:03:41.118898 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.165106 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:41.259401 442801 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:03:41.283168 442801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:03:41.352800 442801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes) I0221 09:03:41.372972 442801 provision.go:86] duration metric: configureAuth took 533.625844ms I0221 09:03:41.373005 442801 ubuntu.go:193] setting minikube options for container-runtime I0221 09:03:41.373216 442801 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:41.373276 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.411159 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:41.411354 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:41.411372 442801 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:03:41.539301 442801 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:03:41.539328 442801 ubuntu.go:71] root file system type: overlay I0221 09:03:41.539501 442801 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:03:41.539561 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.577090 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:41.577270 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:41.577373 442801 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:03:41.719982 442801 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:03:41.720076 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.761362 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:41.761534 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:41.761562 442801 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:03:42.457894 442801 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:03:41.712293296 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:03:42.457928 442801 machine.go:91] provisioned docker machine in 1.971846976s I0221 09:03:42.457939 442801 client.go:171] LocalClient.Create took 9.948461628s I0221 09:03:42.457949 442801 start.go:168] duration metric: libmachine.API.Create for "enable-default-cni-20220221084933-6550" took 9.948522593s I0221 09:03:42.457958 442801 start.go:267] post-start starting for "enable-default-cni-20220221084933-6550" (driver="docker") I0221 09:03:42.457964 442801 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:03:42.458031 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:03:42.458081 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.500407 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:42.591041 442801 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:03:42.593837 442801 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:03:42.593864 442801 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:03:42.593877 442801 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:03:42.593884 442801 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:03:42.593900 442801 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
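The docker.service update above is deliberately idempotent: the candidate unit is staged as docker.service.new, and only when diff -u exits non-zero (the files differ) does the shell fall through to the mv / daemon-reload / enable / restart sequence, which is why the full unified diff appears in the output. Below is a minimal sketch of that write-diff-swap pattern in Go, assuming a hypothetical runSSH helper rather than minikube's actual ssh_runner API, and using a quoted heredoc in place of the printf %s seen in the log:

    package provisionsketch

    import "fmt"

    // updateUnitIfChanged writes newUnit (assumed to end in a newline) to
    // <path>.new on the remote host, then swaps it into place and restarts
    // svc only if it differs from the live unit. runSSH is a hypothetical
    // stand-in for an SSH command runner.
    func updateUnitIfChanged(runSSH func(string) error, path, newUnit, svc string) error {
        // Stage the candidate unit next to the live one via a quoted heredoc,
        // so the multi-line unit body is passed through unmodified.
        stage := fmt.Sprintf("sudo tee %s.new >/dev/null <<'EOF'\n%sEOF", path, newUnit)
        if err := runSSH(stage); err != nil {
            return err
        }
        // diff -u exits 0 when the files are identical, so the block after
        // || runs only when an update is actually needed.
        swap := fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
            "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
            "sudo systemctl -f restart %[2]s; }", path, svc)
        return runSSH(swap)
    }

On a freshly created node the diff is always non-empty (as seen above, where the stock Ubuntu unit is replaced wholesale), so the restart branch runs; on a reused machine the same command becomes a no-op.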
I0221 09:03:42.593960 442801 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:03:42.594044 442801 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:03:42.594142 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:03:42.600714 442801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:03:42.628017 442801 start.go:270] post-start completed in 170.038678ms I0221 09:03:42.628418 442801 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220221084933-6550 I0221 09:03:42.675220 442801 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/config.json ... I0221 09:03:42.675482 442801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:03:42.675527 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.716687 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:42.808692 442801 start.go:129] duration metric: createHost completed in 10.301791872s I0221 09:03:42.808724 442801 start.go:80] releasing machines lock for "enable-default-cni-20220221084933-6550", held for 10.301985759s I0221 09:03:42.808816 442801 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220221084933-6550 I0221 09:03:42.857057 442801 ssh_runner.go:195] Run: systemctl --version I0221 09:03:42.857100 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.857099 442801 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:03:42.857145 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.897241 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:42.900654 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:43.147753 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 
09:03:43.159383 442801 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:03:43.176145 442801 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:03:43.176217 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:03:43.186740 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:03:43.200685 442801 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:03:43.301648 442801 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:03:43.402146 442801 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:03:43.414491 442801 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:03:43.527390 442801 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:03:43.539354 442801 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:03:43.584645 442801 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:03:43.653516 442801 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:03:43.653610 442801 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:03:43.696151 442801 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0221 09:03:43.700610 442801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:03:44.827562 223679 system_pods.go:86] 9 kube-system pods found I0221 09:03:44.827595 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:03:44.827608 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:03:44.827618 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:03:44.827630 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:03:44.827637 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:03:44.827644 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:03:44.827654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:03:44.827659 223679
system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:03:44.827674 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:03:44.830160 223679 out.go:176] W0221 09:03:44.830324 223679 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns W0221 09:03:44.830341 223679 out.go:241] * W0221 09:03:44.831471 223679 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 08:54:40 UTC, end at Mon 2022-02-21 09:03:46 UTC. -- Feb 21 09:03:27 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:27.616092005Z" level=info msg="ignoring event" container=4f38b9bacf0339ab30f3436eb4e78170e83b897fd87d4bddd553ef838a6901a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:28 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:28.688701203Z" level=info msg="ignoring event" container=266cb1873eea9e5440e92a2b3d8794297cc19242738d3fabd7ee7b539ff28661 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:28 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:28.760163062Z" level=info msg="ignoring event" container=d91975382968c4f8c92ab1dab5eb8c09e5acbe64aec249eb478bb9f9eca510d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:29 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:29.770733170Z" level=info msg="ignoring event" container=f870ea5d901515d7e9c45252deba9281ecd6249fe43d402c850863b52502d649 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:29 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:29.770797352Z" level=info msg="ignoring event" container=32ec078ce49383855c916443bb37868add2bff7fb49f40c8b68dd3c61ce3c523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:30 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:30.722591239Z" level=info msg="ignoring event" container=c65e94d51288f1800288c9dc15b69a2625681562db53d70f88c668e7c6cd1ab4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:30 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:30.728235231Z" level=info msg="ignoring event" container=947ed2826f896f07877b3d66696c29359a8c8e4b491bb70f937521fa4b0470a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:31 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:31.833362930Z" level=info msg="ignoring event" container=10c07c311cd6727e3d87d29eafaf85b6ad8c002e94e6e24e19fc6e2229cce2d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:31 calico-20220221084934-6550 dockerd[456]: 
time="2022-02-21T09:03:31.859950749Z" level=info msg="ignoring event" container=56ad2ebf8044a4657ba0e43a1232e327ff77629650e7d34cf462eb7dfdda4115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:32 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:32.838635969Z" level=info msg="ignoring event" container=4d196a793a0e212fb92fed4479917ec76c27131fa9df2039a63f3dc1531b8e4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:32 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:32.856575938Z" level=info msg="ignoring event" container=5be8782476baf29d3c0883b4c4fd66ccd7a5744fea303ba80a6d161d1a4b0a8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:33 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:33.957320035Z" level=info msg="ignoring event" container=af1a2681d7777d89e805f17b60b2a5fa92731bbca52b61d26c45ae5f7883037d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:33 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:33.960477494Z" level=info msg="ignoring event" container=1c68c7348674e640b144fbe091fed64f6f50834f005826f61db52b8a242b6140 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:39 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:39.715963092Z" level=info msg="ignoring event" container=47c9ca5c7ca2166ecd6637c39266e1f1107cc861529675c315c9f52f427ad2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:39 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:39.726593389Z" level=info msg="ignoring event" container=55974a0b5221147225025d52816b8fc4db03bc2ed52233c8229dce191b6b34c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:41 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:41.172528995Z" level=info msg="ignoring event" container=548e41e757130a87f329572331aad3a75812426cc5e189abb4deb9252cbe494b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:41 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:41.230879350Z" level=info msg="ignoring event" container=8cb09ad5b28be399b234008b66bfa44f3fcbfb05983336097526e19abe8fa42d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:42 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:42.253693773Z" level=info msg="ignoring event" container=ac0166177349ff0cee57998c7d4a25e26b658410d3dfdfc78412638245b9b726 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:42 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:42.255202569Z" level=info msg="ignoring event" container=8c15340c79ad79476c9efeb87d8be294d9f1d33c6175fbe853c798b97608f995 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:43 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:43.443283756Z" level=info msg="ignoring event" container=4cd175e1b597000f773c29015e31a6bc8122156dc74fdb3c51789eedf01bc86e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:43 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:43.460207969Z" level=info msg="ignoring event" container=933b20157b6cdd68fc037a26bbbf3d2b151e1dd24fe666b8f25de2a411ba828b 
module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:44 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:44.441722301Z" level=info msg="ignoring event" container=fa0aeb35b4bd828626ec602baed3d034a8cff63c353b49c2b12e8c653868cd90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:44 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:44.462639442Z" level=info msg="ignoring event" container=e9b9d78c45ad7e593cce6a33a8e269837889b17c4383c750379b5d03698389ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:45 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:45.663133371Z" level=info msg="ignoring event" container=c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:45 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:45.717696781Z" level=info msg="ignoring event" container=50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID dbdc8cd6a1ce1 5ef66b403f4f0 2 minutes ago Exited calico-node 5 c0d320ac30a9b 6d88e003ae4d3 6e38f40d628db 3 minutes ago Exited storage-provisioner 5 5d2b5639f06c5 1bc1aa4df9f17 calico/pod2daemon-flexvol@sha256:c17e3e9871682bed00bfd33f8d6f00db1d1a126034a25bf5380355978e0c548d 8 minutes ago Exited flexvol-driver 0 c0d320ac30a9b 01afaa16a59b8 4945b742b8e66 8 minutes ago Exited install-cni 0 c0d320ac30a9b 3d508836fbe39 calico/cni@sha256:9906e2cca8006e1fe9fc3f358a3a06da6253afdd6fad05d594e884e8298ffe1d 8 minutes ago Exited upgrade-ipam 0 c0d320ac30a9b 449cc37a92fe7 2114245ec4d6b 8 minutes ago Running kube-proxy 0 a0f0400a1b94e f012d1d45e221 aceacb6244f9f 8 minutes ago Running kube-scheduler 0 cb8998c81feab 96cc9489b33e5 25f8c7f3da61c 8 minutes ago Running etcd 0 566db401d5d43 cddc9ef001f2d 25444908517a5 8 minutes ago Running kube-controller-manager 0 aa8fb7fa6d1d3 5b808a7ef4a26 62930710c9634 8 minutes ago Running kube-apiserver 0 169b39b50a62e * * ==> describe nodes <== * Name: calico-20220221084934-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=calico-20220221084934-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=calico-20220221084934-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T08_55_04_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 08:55:01 +0000 Taints: <none> Unschedulable: false Lease: HolderIdentity: calico-20220221084934-6550 AcquireTime: <unset> RenewTime: Mon, 21 Feb 2022 09:03:44 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:01:12 +0000 Mon, 21 Feb 2022 08:55:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:01:12
+0000 Mon, 21 Feb 2022 08:55:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:01:12 +0000 Mon, 21 Feb 2022 08:55:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:01:12 +0000 Mon, 21 Feb 2022 08:55:13 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.67.2 Hostname: calico-20220221084934-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: b97b2c97-fa91-4271-b3ba-befe7b7ea324 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system calico-kube-controllers-8594699699-ftdtm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m32s kube-system calico-node-zcdj6 250m (3%) 0 (0%) 0 (0%) 0 (0%) 8m33s kube-system coredns-64897985d-r75jc 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 8m32s kube-system etcd-calico-20220221084934-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 8m43s kube-system kube-apiserver-calico-20220221084934-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 8m43s kube-system kube-controller-manager-calico-20220221084934-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 8m43s kube-system kube-proxy-kwcvx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m33s kube-system kube-scheduler-calico-20220221084934-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 8m43s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m30s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1 (12%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 8m31s kube-proxy Normal Starting 8m58s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 8m58s kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 8m57s (x4 over 8m58s) kubelet Node calico-20220221084934-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m57s (x3 over 8m58s) kubelet Node calico-20220221084934-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m57s (x3 over 8m58s) kubelet Node calico-20220221084934-6550 status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 8m43s kubelet Node calico-20220221084934-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m43s kubelet Node calico-20220221084934-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m43s kubelet Node calico-20220221084934-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 8m43s kubelet Updated Node Allocatable limit across pods Normal Starting 8m43s kubelet Starting kubelet. Normal NodeReady 8m33s kubelet Node calico-20220221084934-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 80 7d 07 f0 ca 08 06 [ +2.561210] IPv4: martian source 10.85.0.159 from 10.85.0.159, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 23 e1 c4 83 2c 08 06 [ +2.615653] IPv4: martian source 10.85.0.160 from 10.85.0.160, on dev eth0 [ +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e 64 41 7f 5e 31 08 06 [ +2.733452] IPv4: martian source 10.85.0.161 from 10.85.0.161, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff da fc d1 c9 f2 2a 08 06 [ +2.883194] IPv4: martian source 10.85.0.162 from 10.85.0.162, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 5e d5 29 ea a8 08 06 [ +2.455339] IPv4: martian source 10.85.0.163 from 10.85.0.163, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 50 c8 60 43 de 08 06 [ +2.674144] IPv4: martian source 10.85.0.164 from 10.85.0.164, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff ae b8 d8 5c 06 86 08 06 [ +2.173451] IPv4: martian source 10.85.0.165 from 10.85.0.165, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 23 71 a2 17 13 08 06 [ +3.191430] IPv4: martian source 10.85.0.166 from 10.85.0.166, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff fa ee 02 4a fe dc 08 06 [ +3.010319] IPv4: martian source 10.85.0.167 from 10.85.0.167, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 1f 49 7a 27 ae 08 06 [ +3.012859] IPv4: martian source 10.85.0.168 from 10.85.0.168, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 7f b6 f0 26 29 08 06 [ +4.014892] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth3bf823e9 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 3a 4d b5 7d b0 08 06 [ +8.773962] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth9d08a992 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff ae fa 77 2a a4 f4 08 06 * * ==> etcd [96cc9489b33e] <== * {"level":"info","ts":"2022-02-21T08:54:55.335Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:276","msg":"now serving 
peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:calico-20220221084934-6550 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"etcdserver/server.go:2500","msg":"cluster version 
is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.326Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"} {"level":"warn","ts":"2022-02-21T08:55:39.983Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"273.653772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-02-21T08:55:39.983Z","caller":"traceutil/trace.go:171","msg":"trace[311069928] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:619; }","duration":"273.793799ms","start":"2022-02-21T08:55:39.709Z","end":"2022-02-21T08:55:39.983Z","steps":["trace[311069928] 'agreement among raft nodes before linearized reading' (duration: 87.453612ms)","trace[311069928] 'range keys from in-memory index tree' (duration: 186.156694ms)"],"step_count":2} * * ==> kernel <== * 09:03:47 up 46 min, 0 users, load average: 4.79, 4.56, 3.65 Linux calico-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [5b808a7ef4a2] <== * I0221 08:54:58.302165 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 08:54:58.302218 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 08:54:58.302232 1 cache.go:39] Caches are synced for autoregister controller I0221 08:54:58.302298 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:54:58.302435 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:54:59.057676 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:54:59.062371 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:54:59.065126 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:54:59.065494 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:54:59.065514 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
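The slow-read warning in the etcd section above is self-explaining once the trace steps are summed: 87.453612ms waiting for raft agreement before the linearized read plus 186.156694ms walking the in-memory index tree accounts for 273.61ms of the trace's reported 273.793799ms, with the small remainder being untraced overhead; the request is flagged simply because it exceeded the 100ms expected-duration threshold. A throwaway parser for that traceutil step format, assuming log lines shaped exactly like the one above:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    // stepRE matches etcd traceutil steps such as
    // "'range keys from in-memory index tree' (duration: 186.156694ms)".
    var stepRE = regexp.MustCompile(`'([^']+)' \(duration: ([0-9.]+(?:ns|µs|us|ms|s))\)`)

    func main() {
        line := "steps:[\"trace[311069928] 'agreement among raft nodes before linearized reading' (duration: 87.453612ms)\",\"trace[311069928] 'range keys from in-memory index tree' (duration: 186.156694ms)\"]"
        var total time.Duration
        for _, m := range stepRE.FindAllStringSubmatch(line, -1) {
            d, err := time.ParseDuration(m[2])
            if err != nil {
                continue // skip anything that isn't a parsable duration
            }
            fmt.Printf("%-55s %v\n", m[1], d)
            total += d
        }
        // Prints ~273.610306ms; the logged total of 273.793799ms includes
        // a little untraced overhead on top of the two steps.
        fmt.Println("sum of traced steps:", total)
    }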
I0221 08:54:59.452494 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 08:54:59.482209 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 08:54:59.624710 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 08:54:59.629617 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2] I0221 08:54:59.630542 1 controller.go:611] quota admission added evaluator for: endpoints I0221 08:54:59.634181 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 08:55:00.240484 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 08:55:02.953918 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 08:55:02.962144 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 08:55:02.971590 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 08:55:03.150647 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 08:55:04.749784 1 controller.go:611] quota admission added evaluator for: poddisruptionbudgets.policy I0221 08:55:13.694331 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 08:55:13.794266 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 08:55:15.516406 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [cddc9ef001f2] <== * W0221 08:55:23.405231 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.405240 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.405420 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.405433 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.405444 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.405712 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.405735 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.405759 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" E0221 08:55:23.406053 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406068 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406085 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.406285 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406298 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406310 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.406555 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406568 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406581 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.406798 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406810 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406826 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.407275 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.407296 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.407313 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" I0221 08:55:43.750560 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0221 08:55:44.551239 1 shared_informer.go:247] Caches are synced for garbage collector * * ==> kube-proxy [449cc37a92fe] <== * I0221 08:55:15.461389 1 node.go:163] Successfully retrieved node IP: 192.168.67.2 I0221 08:55:15.461460 1 server_others.go:138] "Detected node IP" address="192.168.67.2" I0221 08:55:15.461489 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 08:55:15.512835 1 server_others.go:206] "Using iptables Proxier" I0221 08:55:15.512880 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 08:55:15.512893 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 08:55:15.512916 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 08:55:15.513378 1 server.go:656] "Version info" version="v1.23.4" I0221 08:55:15.513997 1 config.go:317] "Starting service config controller" I0221 08:55:15.514024 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 08:55:15.514116 1 config.go:226] "Starting endpoint slice config controller" I0221 08:55:15.514123 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 08:55:15.614579 1 shared_informer.go:247] Caches are synced for endpoint slice config I0221 08:55:15.614643 1 shared_informer.go:247] Caches are synced for service config * * ==> kube-scheduler [f012d1d45e22] <== * W0221 08:54:58.217986 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:54:58.218793 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:54:58.217997 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 08:54:58.218812 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 08:54:58.218055 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 08:54:58.218846 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 08:54:58.218067 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User 
"system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 08:54:58.218875 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 08:54:58.218174 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:54:58.218888 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:54:58.218260 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 08:54:58.218900 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 08:54:58.218378 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0221 08:54:58.218922 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0221 08:54:58.218504 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 08:54:58.218933 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 08:54:58.218946 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 08:54:58.218978 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 08:54:59.168312 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 08:54:59.168351 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 08:54:59.207809 1 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:54:59.207897 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:54:59.246521 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 08:54:59.246561 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope I0221 08:54:59.714113 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 08:54:40 UTC, end at Mon 2022-02-21 09:03:47 UTC. -- Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.710223 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.710313 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\\\" network for pod \\\"coredns-64897985d-r75jc\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-r75jc_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-64897985d-r75jc" podUID=8b61f5f5-e695-42e1-8247-797a3d90eef7 Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735736 2000 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735804 
2000 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735828 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735892 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\\\" network for pod \\\"calico-kube-controllers-8594699699-ftdtm\\\": networkPlugin cni failed to set up pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" podUID=198a6a8f-4d1b-44fc-9a43-3166e582db73 Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.850866 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-r75jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\"" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.873464 2000 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.875481 2000 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\"" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.880374 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container 
\"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\"" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.902960 2000 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.904900 2000 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\"" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.706342 2000 cni.go:362] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" podSandboxID={Type:docker ID:a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495} podNetnsPath="/proc/168070/ns/net" networkType="calico" networkName="k8s-pod-network" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.711457 2000 cni.go:362] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" podSandboxID={Type:docker ID:64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e} podNetnsPath="/proc/168108/ns/net" networkType="calico" networkName="k8s-pod-network" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943006 2000 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943091 2000 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943122 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943178 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\\\" network for pod \\\"coredns-64897985d-r75jc\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-r75jc_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-64897985d-r75jc" podUID=8b61f5f5-e695-42e1-8247-797a3d90eef7 Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:46.944023 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-r75jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\"" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.944936 2000 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.944989 2000 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.945017 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.945101 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\\\": rpc error: code = Unknown desc = failed to set up 
sandbox container \\\"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\\\" network for pod \\\"calico-kube-controllers-8594699699-ftdtm\\\": networkPlugin cni failed to set up pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" podUID=198a6a8f-4d1b-44fc-9a43-3166e582db73 Feb 21 09:03:47 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:47.020379 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\"" Feb 21 09:03:47 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:47.045975 2000 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\"" * * ==> storage-provisioner [6d88e003ae4d] <== * I0221 09:00:41.449434 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0221 09:01:11.451563 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p calico-20220221084934-6550 -n calico-20220221084934-6550 helpers_test.go:262: (dbg) Run: kubectl --context calico-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/calico]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context calico-20220221084934-6550 describe pod calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc helpers_test.go:276: (dbg) Non-zero exit: kubectl --context calico-20220221084934-6550 describe pod calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc: exit status 1 (68.83274ms) ** stderr ** Error from server (NotFound): pods "calico-kube-controllers-8594699699-ftdtm" not found Error from server (NotFound): pods "coredns-64897985d-r75jc" not found ** /stderr ** helpers_test.go:278: kubectl --context calico-20220221084934-6550 describe pod calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc: exit status 1 helpers_test.go:176: Cleaning up "calico-20220221084934-6550" profile ... 
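Note: every CreatePodSandbox failure in the kubelet log above reduces to the same root cause, "stat /var/lib/calico/nodename: no such file or directory": the calico/node pod never became ready, so it never wrote its nodename file, and CNI setup for every other pod (coredns, calico-kube-controllers) failed on that missing file. A sketch of how this could be confirmed while the profile is still up (label and namespace as in the stock Calico manifests; the profile is deleted immediately below, so these commands are illustrative only):

  # Is calico/node running, and why is it not ready?
  kubectl --context calico-20220221084934-6550 -n kube-system get pods -l k8s-app=calico-node -o wide
  kubectl --context calico-20220221084934-6550 -n kube-system describe daemonset calico-node
  # The file kubelet keeps stat'ing lives on the node itself:
  out/minikube-linux-amd64 -p calico-20220221084934-6550 ssh -- ls -l /var/lib/calico/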
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p calico-20220221084934-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p calico-20220221084934-6550: (2.893399255s) === CONT TestNetworkPlugins/group/bridge === RUN TestNetworkPlugins/group/bridge/Start net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p bridge-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=docker === CONT TestNetworkPlugins/group/kindnet/NetCatPod net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007918178s === RUN TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:04:05.983652 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200766854s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156779625s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148465058s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13259579s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:04:33.148416 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context 
kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140488104s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136630445s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126335273s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:05:10.800096 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148676284s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133340056s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129512436s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:05:38.483426 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context 
auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128902243s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133325385s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:06:16.369831 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.375077 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.385327 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.405618 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.445952 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.526233 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.686635 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:17.007118 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:17.648208 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:18.928460 6550 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12772282s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:06:21.489618 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:26.610648 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:32.220908 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory E0221 09:06:36.851605 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137517782s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:06:57.332109 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/auto/DNS net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144808472s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* === CONT TestNetworkPlugins/group/auto net_test.go:198: "auto" test finished in 17m46.663428147s, failed=true net_test.go:199: *** TestNetworkPlugins/group/auto FAILED at 2022-02-21 09:07:20.424948873 +0000 UTC m=+2533.187268467 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/auto]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect auto-20220221084933-6550 helpers_test.go:236: (dbg) docker inspect 
auto-20220221084933-6550: -- stdout -- [ { "Id": "14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d", "Created": "2022-02-21T08:56:59.944923949Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 275744, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:57:00.400031147Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/resolv.conf", "HostnamePath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/hostname", "HostsPath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/hosts", "LogPath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d-json.log", "Name": "/auto-20220221084933-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "auto-20220221084933-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "auto-20220221084933-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344
cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88/merged", "UpperDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88/diff", "WorkDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "auto-20220221084933-6550", "Source": "/var/lib/docker/volumes/auto-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "auto-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "auto-20220221084933-6550", "name.minikube.sigs.k8s.io": "auto-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "bc21dc1487002ea911d18ddad607e56bce375fd30c415325f2c8ad8a51175f58", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49379" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49378" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49375" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49377" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49376" } ] }, "SandboxKey": "/var/run/docker/netns/bc21dc148700", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "auto-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.76.2" }, "Links": null, "Aliases": [ "14e23cc18317", "auto-20220221084933-6550" ], "NetworkID": "b94a766473076f24d64d27d7767effe55cd2409ed2b6d5964dc439f32cedab19", "EndpointID": 
"18759bcb1f5666f27fc37ee6b69b003f2272f85d2456e612f032471e98395eee", "Gateway": "192.168.76.1", "IPAddress": "192.168.76.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:4c:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p auto-20220221084933-6550 -n auto-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/auto FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/auto]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p auto-20220221084933-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p auto-20220221084933-6550 logs -n 25: (1.215894629s) helpers_test.go:253: TestNetworkPlugins/group/auto logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 
21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:03:51 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd 
hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:03:51.048978 450843 out.go:297] Setting OutFile to fd 1 ... I0221 09:03:51.049079 450843 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:51.049091 450843 out.go:310] Setting ErrFile to fd 2... I0221 09:03:51.049098 450843 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:51.049264 450843 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:03:51.049642 450843 out.go:304] Setting JSON to false I0221 09:03:51.072350 450843 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2785,"bootTime":1645431446,"procs":576,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:03:51.072451 450843 start.go:122] virtualization: kvm guest I0221 09:03:51.075112 450843 out.go:176] * [bridge-20220221084933-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:03:51.076523 450843 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:03:51.075281 450843 notify.go:193] Checking for updates... I0221 09:03:51.077790 450843 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:03:51.079195 450843 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:03:51.080510 450843 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:03:51.081799 450843 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:03:51.082286 450843 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:51.082382 450843 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:51.082456 450843 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:51.082505 450843 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:03:51.135679 450843 docker.go:132] docker version: linux-20.10.12 I0221 09:03:51.135786 450843 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:51.248907 450843 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:51.169795922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 
20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:03:51.249068 450843 docker.go:237] overlay module found I0221 09:03:51.252015 450843 out.go:176] * Using the docker driver based on user configuration I0221 09:03:51.252048 450843 start.go:281] selected driver: docker I0221 09:03:51.252053 450843 start.go:798] validating driver "docker" against I0221 09:03:51.252073 450843 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:03:51.252125 450843 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:03:51.252146 450843 out.go:241] ! Your cgroup does not allow setting memory. 
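Note: the two warnings above mean the requested --memory=2048 cannot actually be enforced on this host; minikube proceeds, but the limit is effectively ignored. What the daemon itself reports can be checked directly, and on cgroup v1 Debian/Ubuntu hosts the usual remedy, per the URL minikube prints next, is enabling the memory controller at boot. A sketch only; the GRUB change is host-level and needs a reboot:

  # What does the daemon report for limit support?
  docker info --format '{{.MemoryLimit}} {{.SwapLimit}}'
  # Typical cgroup v1 fix: in /etc/default/grub set
  #   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
  # then: sudo update-grub && sudo reboot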
I0221 09:03:51.253396 450843 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:03:51.253973 450843 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:51.362904 450843 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:51.284159132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 09:03:51.363053 450843 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:03:51.363204 450843 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:03:51.363228 450843 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:03:51.363244 450843 cni.go:93] Creating CNI manager for "bridge" I0221 09:03:51.363252 450843 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni I0221 09:03:51.363269 450843 start_flags.go:302] config: {Name:bridge-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:03:51.366013 450843 out.go:176] * Starting control plane node bridge-20220221084933-6550 in cluster bridge-20220221084933-6550 I0221 09:03:51.366043 450843 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:03:51.367305 450843 out.go:176] * Pulling base image ... 
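Note: "Pulling base image ..." is here a cache check rather than a network pull: the next lines consult the preloaded-images tarball under .minikube/cache/preloaded-tarball/ and then look for the kicbase image in the local Docker daemon, and both hit, so nothing is downloaded. The same two caches can be inspected by hand (a sketch, with MINIKUBE_HOME as exported for this job):

  ls "$MINIKUBE_HOME/.minikube/cache/preloaded-tarball/"
  docker images --digests gcr.io/k8s-minikube/kicbase-builds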
I0221 09:03:51.367334 450843 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 09:03:51.367368 450843 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
I0221 09:03:51.367383 450843 cache.go:57] Caching tarball of preloaded images
I0221 09:03:51.367436 450843 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0221 09:03:51.367599 450843 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0221 09:03:51.367626 450843 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker
I0221 09:03:51.367731 450843 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/config.json ...
I0221 09:03:51.367754 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/config.json: {Name:mk9f30a296298673b7d3985a1a22baf15a0d8519 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:03:51.418870 450843 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0221 09:03:51.418908 450843 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0221 09:03:51.418928 450843 cache.go:208] Successfully downloaded all kic artifacts
I0221 09:03:51.418966 450843 start.go:313] acquiring machines lock for bridge-20220221084933-6550: {Name:mk5df6888113cf2604548c3a60d88507d1709053 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0221 09:03:51.419213 450843 start.go:317] acquired machines lock for "bridge-20220221084933-6550" in 225.518µs
I0221 09:03:51.419251 450843 start.go:89] Provisioning new machine with config: &{Name:bridge-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 09:03:51.419356 450843 start.go:126] createHost starting for "" (driver="docker")
I0221 09:03:49.110101 442801 out.go:203] - Booting up control plane ...
I0221 09:03:51.421794 450843 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0221 09:03:51.422033 450843 start.go:160] libmachine.API.Create for "bridge-20220221084933-6550" (driver="docker")
I0221 09:03:51.422065 450843 client.go:168] LocalClient.Create starting
I0221 09:03:51.422157 450843 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem
I0221 09:03:51.422198 450843 main.go:130] libmachine: Decoding PEM data...
I0221 09:03:51.422218 450843 main.go:130] libmachine: Parsing certificate...
I0221 09:03:51.422289 450843 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem
I0221 09:03:51.422318 450843 main.go:130] libmachine: Decoding PEM data...
I0221 09:03:51.422337 450843 main.go:130] libmachine: Parsing certificate...
I0221 09:03:51.422664 450843 cli_runner.go:133] Run: docker network inspect bridge-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0221 09:03:51.458778 450843 cli_runner.go:180] docker network inspect bridge-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0221 09:03:51.458867 450843 network_create.go:254] running [docker network inspect bridge-20220221084933-6550] to gather additional debugging logs...
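
The preload.go and cache.go entries at the top of this block show the preload fast path: before downloading anything, minikube checks whether the version-specific tarball is already in its cache and skips the download when the file is present. A minimal sketch of that check, assuming a hypothetical cachedPreloadPath helper (the names here are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// cachedPreloadPath mirrors the layout visible in the log:
// <minikube home>/cache/preloaded-tarball/preloaded-images-k8s-v17-<k8s>-<runtime>-overlay2-amd64.tar.lz4
func cachedPreloadPath(home, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v17-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(home, "cache", "preloaded-tarball", name)
}

// preloadExists reports whether the tarball is already cached, i.e. the
// "Found local preload ... skipping download" path in the log.
func preloadExists(home, k8sVersion, runtime string) (bool, error) {
	_, err := os.Stat(cachedPreloadPath(home, k8sVersion, runtime))
	if errors.Is(err, os.ErrNotExist) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	ok, err := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.23.4", "docker")
	fmt.Println(ok, err)
}
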
I0221 09:03:51.458907 450843 cli_runner.go:133] Run: docker network inspect bridge-20220221084933-6550
W0221 09:03:51.492681 450843 cli_runner.go:180] docker network inspect bridge-20220221084933-6550 returned with exit code 1
I0221 09:03:51.492712 450843 network_create.go:257] error running [docker network inspect bridge-20220221084933-6550]: docker network inspect bridge-20220221084933-6550: exit status 1
stdout:
[]

stderr:
Error: No such network: bridge-20220221084933-6550
I0221 09:03:51.492727 450843 network_create.go:259] output of [docker network inspect bridge-20220221084933-6550]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: bridge-20220221084933-6550

** /stderr **
I0221 09:03:51.492765 450843 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:03:51.534844 450843 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5d96ab4d6b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:0b:cb:43}}
I0221 09:03:51.535740 450843 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3436ceea5013 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ca:78:ad:42}}
I0221 09:03:51.536644 450843 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0009c01d0] misses:0}
I0221 09:03:51.536695 450843 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0221 09:03:51.536710 450843 network_create.go:106] attempt to create docker network bridge-20220221084933-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
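
The subnet scan above walks candidate private /24 blocks (192.168.49.0 and 192.168.58.0 are taken by earlier profiles; 192.168.67.0 is free) and reserves the first one no local interface already covers. A rough sketch of that idea; the step of 9 in the third octet is inferred from the 49 → 58 → 67 progression in the log, and this is illustrative rather than minikube's network package:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside cidr.
func taken(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative: treat unknown state as taken
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

// freeSubnet tries 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ...
// and returns the first block not claimed by an existing interface.
func freeSubnet() (*net.IPNet, error) {
	for third := 49; third <= 255; third += 9 {
		_, cidr, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		if !taken(cidr) {
			return cidr, nil
		}
	}
	return nil, fmt.Errorf("no free /24 found")
}

func main() {
	cidr, err := freeSubnet()
	fmt.Println(cidr, err)
}
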
I0221 09:03:51.536770 450843 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220221084933-6550
I0221 09:03:51.614280 450843 network_create.go:90] docker network bridge-20220221084933-6550 192.168.67.0/24 created
I0221 09:03:51.614320 450843 kic.go:106] calculated static IP "192.168.67.2" for the "bridge-20220221084933-6550" container
I0221 09:03:51.614378 450843 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0221 09:03:51.655290 450843 cli_runner.go:133] Run: docker volume create bridge-20220221084933-6550 --label name.minikube.sigs.k8s.io=bridge-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true
I0221 09:03:51.695234 450843 oci.go:102] Successfully created a docker volume bridge-20220221084933-6550
I0221 09:03:51.695312 450843 cli_runner.go:133] Run: docker run --rm --name bridge-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220221084933-6550 --entrypoint /usr/bin/test -v bridge-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
I0221 09:03:52.317204 450843 oci.go:106] Successfully prepared a docker volume bridge-20220221084933-6550
I0221 09:03:52.317259 450843 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 09:03:52.317279 450843 kic.go:179] Starting extracting preloaded images to volume ...
I0221 09:03:52.317369 450843 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
I0221 09:03:58.456628 442801 out.go:203] - Configuring RBAC rules ...
I0221 09:04:01.103747 442801 cni.go:93] Creating CNI manager for "bridge"
I0221 09:04:01.108470 442801 out.go:176] * Configuring bridge CNI (Container Networking Interface) ...
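
Extraction happens inside a throwaway container: the lz4 tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar runs as the entrypoint. A sketch of driving that same command from Go with os/exec (image and mount paths taken from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload replays the log's extraction step: run tar inside the base
// image, with the preload mounted read-only and the volume mounted read-write.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/path/to/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4",
		"bridge-20220221084933-6550",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531")
	fmt.Println(err)
}
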
I0221 09:04:01.108564 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0221 09:04:01.117961 442801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0221 09:04:01.137449 442801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0221 09:04:01.137607 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:04:01.137732 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=enable-default-cni-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T09_04_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:04:01.437957 442801 ops.go:34] apiserver oom_adj: -16
I0221 09:04:01.438080 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:04:02.000868 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:04:01.030789 450843 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (8.713376176s)
I0221 09:04:01.030835 450843 kic.go:188] duration metric: took 8.713550 seconds to extract preloaded images to volume
W0221 09:04:01.030877 450843 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0221 09:04:01.030890 450843 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
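
The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the rendered bridge CNI config for the parallel enable-default-cni profile. The log shows only the destination and size, not the bytes, so the JSON below is a representative bridge conflist for a 10.244.0.0/16 pod CIDR written from Go; treat the field values as an assumption, not what minikube actually sent:

package main

import "os"

// A representative bridge CNI conflist (assumed shape; the log only records
// the destination path and the 457-byte size, not the rendered contents).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written locally here; minikube copies the equivalent bytes to
	// /etc/cni/net.d/1-k8s.conflist inside the node over SSH.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
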
I0221 09:04:01.030935 450843 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0221 09:04:01.149423 450843 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20220221084933-6550 --name bridge-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20220221084933-6550 --network bridge-20220221084933-6550 --ip 192.168.67.2 --volume bridge-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
I0221 09:04:01.619785 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Running}}
I0221 09:04:01.657132 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}}
I0221 09:04:01.691906 450843 cli_runner.go:133] Run: docker exec bridge-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables
I0221 09:04:01.762525 450843 oci.go:281] the created container "bridge-20220221084933-6550" has a running status.
I0221 09:04:01.762561 450843 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa...
I0221 09:04:01.825039 450843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0221 09:04:01.921998 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}}
I0221 09:04:01.965911 450843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0221 09:04:01.965932 450843 kic_runner.go:114] Args: [docker exec --privileged bridge-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys]
I0221 09:04:02.060550 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}}
I0221 09:04:02.102476 450843 machine.go:88] provisioning docker machine ...
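
The "Creating ssh key for kic" step above generates a fresh RSA keypair on the host and copies the public half into the container's authorized_keys. A sketch of that keygen using the standard library plus golang.org/x/crypto/ssh (file path and helper name are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeSSHKey creates id_rsa and id_rsa.pub, with the public key in
// authorized_keys format, mirroring the provisioning step in the log.
func writeSSHKey(path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(path, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	if err := writeSSHKey("id_rsa"); err != nil {
		panic(err)
	}
}
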
I0221 09:04:02.102515 450843 ubuntu.go:169] provisioning hostname "bridge-20220221084933-6550"
I0221 09:04:02.102558 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:02.141339 450843 main.go:130] libmachine: Using SSH client type: native
I0221 09:04:02.141561 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 }
I0221 09:04:02.141585 450843 main.go:130] libmachine: About to run SSH command:
sudo hostname bridge-20220221084933-6550 && echo "bridge-20220221084933-6550" | sudo tee /etc/hostname
I0221 09:04:02.276015 450843 main.go:130] libmachine: SSH cmd err, output: : bridge-20220221084933-6550
I0221 09:04:02.276101 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:02.310672 450843 main.go:130] libmachine: Using SSH client type: native
I0221 09:04:02.310874 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 }
I0221 09:04:02.310899 450843 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sbridge-20220221084933-6550' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-20220221084933-6550/g' /etc/hosts;
	else
		echo '127.0.1.1 bridge-20220221084933-6550' | sudo tee -a /etc/hosts;
	fi
fi
I0221 09:04:02.435046 450843 main.go:130] libmachine: SSH cmd err, output: :
I0221 09:04:02.435075 450843 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube}
I0221 09:04:02.435113 450843 ubuntu.go:177] setting up certificates
I0221 09:04:02.435120 450843 provision.go:83] configureAuth start
I0221 09:04:02.435185 450843 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220221084933-6550
I0221 09:04:02.470021 450843 provision.go:138] copyHostCerts
I0221 09:04:02.470092 450843 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ...
I0221 09:04:02.470106 450843 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem
I0221 09:04:02.470167 450843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes)
I0221 09:04:02.470239 450843 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ...
I0221 09:04:02.470253 450843 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem
I0221 09:04:02.470274 450843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes)
I0221 09:04:02.470357 450843 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ...
I0221 09:04:02.470368 450843 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem
I0221 09:04:02.470389 450843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes)
I0221 09:04:02.470433 450843 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.bridge-20220221084933-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube bridge-20220221084933-6550]
I0221 09:04:02.642265 450843 provision.go:172] copyRemoteCerts
I0221 09:04:02.642319 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0221 09:04:02.642351 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:02.675755 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker}
I0221 09:04:02.762558 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0221 09:04:02.781693 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0221 09:04:02.799380 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0221 09:04:02.817291 450843 provision.go:86] duration metric: configureAuth took 382.145126ms
I0221 09:04:02.817321 450843 ubuntu.go:193] setting minikube options for container-runtime
I0221 09:04:02.817512 450843 config.go:176] Loaded profile config "bridge-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 09:04:02.817595 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:02.851272 450843 main.go:130] libmachine: Using SSH client type: native
I0221 09:04:02.851449 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 }
I0221 09:04:02.851469 450843 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 09:04:02.971079 450843 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 09:04:02.971105 450843 ubuntu.go:71] root file system type: overlay
I0221 09:04:02.971293 450843 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 09:04:02.971353 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:03.004341 450843 main.go:130] libmachine: Using SSH client type: native
I0221 09:04:03.004526 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 }
I0221 09:04:03.004630 450843 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 09:04:03.136130 450843 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0221 09:04:03.136225 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:03.170767 450843 main.go:130] libmachine: Using SSH client type: native
I0221 09:04:03.170945 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 }
I0221 09:04:03.170983 450843 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:04:03.838552 450843 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-02-21 09:04:03.129819641 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 09:04:03.838589 450843 machine.go:91] provisioned docker machine in 1.736085907s
I0221 09:04:03.838598 450843 client.go:171] LocalClient.Create took 12.416525048s
I0221 09:04:03.838614 450843 start.go:168] duration metric: libmachine.API.Create for "bridge-20220221084933-6550" took 12.416582656s
I0221 09:04:03.838620 450843 start.go:267] post-start starting for "bridge-20220221084933-6550" (driver="docker")
I0221 09:04:03.838625 450843 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:04:03.838687 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:04:03.838740 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:03.871312 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker}
I0221 09:04:03.962963 450843 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:04:03.965877 450843 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:04:03.965896 450843 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:04:03.965904 450843 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:04:03.965911 450843 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:04:03.965921 450843 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
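
The diff output above is the point of the `diff -u ... || { mv ...; systemctl restart docker; }` idiom: the unit file is replaced and docker restarted only when the rendered content actually differs from what is on disk. The same update-only-on-change pattern in Go (paths and the restart hook are illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes the rendered unit content and returns true if the file
// changed, in which case the caller should daemon-reload and restart.
func updateUnit(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // nothing to do; mirrors "diff -u" exiting zero
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := updateUnit("/lib/systemd/system/docker.service", []byte("..."))
	if err != nil {
		panic(err)
	}
	if changed {
		// mirrors: systemctl -f daemon-reload && systemctl -f restart docker
		_ = exec.Command("systemctl", "daemon-reload").Run()
		_ = exec.Command("systemctl", "restart", "docker").Run()
	}
	fmt.Println("changed:", changed)
}
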
I0221 09:04:03.965977 450843 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ...
I0221 09:04:03.966056 450843 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs
I0221 09:04:03.966143 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0221 09:04:03.972869 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes)
I0221 09:04:03.990511 450843 start.go:270] post-start completed in 151.880985ms
I0221 09:04:03.990813 450843 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220221084933-6550
I0221 09:04:04.024103 450843 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/config.json ...
I0221 09:04:04.024411 450843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0221 09:04:04.024465 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:04.059852 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker}
I0221 09:04:04.143527 450843 start.go:129] duration metric: createHost completed in 12.724156873s
I0221 09:04:04.143559 450843 start.go:80] releasing machines lock for "bridge-20220221084933-6550", held for 12.724325494s
I0221 09:04:04.143657 450843 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220221084933-6550
I0221 09:04:04.176805 450843 ssh_runner.go:195] Run: systemctl --version
I0221 09:04:04.176859 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:04.176889 450843 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0221 09:04:04.176938 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550
I0221 09:04:04.211410 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker}
I0221 09:04:04.211622 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker}
I0221 09:04:04.435927 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0221 09:04:04.445594 450843 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 09:04:04.456463 450843 cruntime.go:272] skipping containerd shutdown because we are bound to it
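
The filesync step above mirrors host files under $MINIKUBE_HOME/files into the node, preserving their relative paths (files/etc/ssl/certs/65502.pem lands in /etc/ssl/certs). A sketch of that mapping with filepath.Walk; the print stands in for the scp-over-SSH transfer:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// syncFiles walks <root> and reports each file with the node-side path it
// maps to: the path relative to <root> becomes an absolute path on the node.
func syncFiles(root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		target := "/" + filepath.ToSlash(rel)
		// A real implementation would scp the file over the SSH session here.
		fmt.Printf("local asset: %s -> %s\n", path, target)
		return nil
	})
}

func main() {
	if err := syncFiles(filepath.Join(os.Getenv("MINIKUBE_HOME"), "files")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
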
I0221 09:04:04.456545 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0221 09:04:04.466270 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0221 09:04:04.479108 450843 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0221 09:04:04.570245 450843 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0221 09:04:04.652126 450843 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 09:04:04.661792 450843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0221 09:04:04.750226 450843 ssh_runner.go:195] Run: sudo systemctl start docker
I0221 09:04:04.761194 450843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:04:04.800783 450843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:04:04.843188 450843 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
I0221 09:04:04.843269 450843 cli_runner.go:133] Run: docker network inspect bridge-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:04:04.875779 450843 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0221 09:04:04.879091 450843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:04:04.890584 450843 out.go:176] - kubelet.housekeeping-interval=5m
I0221 09:04:04.890666 450843 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 09:04:04.890718 450843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:04:04.924400 450843 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0221 09:04:04.924431 450843 docker.go:537] Images already preloaded, skipping extraction
I0221 09:04:04.924490 450843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:04:04.960777 450843 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0221 09:04:04.960806 450843 cache_images.go:84] Images are preloaded, skipping loading
I0221 09:04:04.960852 450843 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0221 09:04:05.053659 450843 cni.go:93] Creating CNI manager for "bridge"
I0221 09:04:05.053700 450843 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
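
Deciding to "skip extraction" above is just a set comparison: list what the runtime already has via `docker images --format {{.Repository}}:{{.Tag}}` and confirm every expected image is present. A sketch of that check (expected refs copied from the log; the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImagesPresent returns true when every expected ref shows up in
// `docker images --format {{.Repository}}:{{.Tag}}`.
func preloadedImagesPresent(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, ref := range expected {
		if !have[ref] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloadedImagesPresent([]string{
		"k8s.gcr.io/kube-apiserver:v1.23.4",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/pause:3.6",
	})
	fmt.Println(ok, err)
}
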
I0221 09:04:05.053725 450843 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-20220221084933-6550 NodeName:bridge-20220221084933-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 09:04:05.053913 450843 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "bridge-20220221084933-6550"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 09:04:05.054031 450843 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=bridge-20220221084933-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2

[Install]
 config:
{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
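
The kubeadm YAML above is rendered from a Go template filled with the options struct logged just before it. A cut-down illustration of that render step with text/template; this is a fragment with a few fields, not minikube's full template:

package main

import (
	"os"
	"text/template"
)

// A fragment of a kubeadm ClusterConfiguration, parameterized the way the
// logged options struct suggests. Illustrative only.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type options struct {
	ClusterName, ControlPlaneAddress, KubernetesVersion string
	PodSubnet, ServiceCIDR                              string
	APIServerPort                                       int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, options{
		ClusterName:         "mk",
		ControlPlaneAddress: "control-plane.minikube.internal",
		KubernetesVersion:   "v1.23.4",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
		APIServerPort:       8443,
	})
}
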
I0221 09:04:05.054097 450843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
I0221 09:04:05.061827 450843 binaries.go:44] Found k8s binaries, skipping transfer
I0221 09:04:05.061904 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0221 09:04:05.069705 450843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
I0221 09:04:05.083281 450843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0221 09:04:05.096800 450843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I0221 09:04:05.111629 450843 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0221 09:04:05.114792 450843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:04:05.124307 450843 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550 for IP: 192.168.67.2
I0221 09:04:05.124419 450843 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
I0221 09:04:05.124463 450843 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
I0221 09:04:05.124507 450843 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.key
I0221 09:04:05.124520 450843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt with IP's: []
I0221 09:04:05.319892 450843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt ...
I0221 09:04:05.319928 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: {Name:mk8cbb46271d42fb75fda4f65da2d7262d06ec86 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:04:05.320141 450843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.key ...
I0221 09:04:05.320159 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.key: {Name:mkd13d656a2820a92f6d5b9d3905007effd80085 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:04:05.320271 450843 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e
I0221 09:04:05.320288 450843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0221 09:04:05.618739 450843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e ...
I0221 09:04:05.618772 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e: {Name:mka92eaa59d437c0a58d327ef573ac021dee9683 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:04:05.618979 450843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e ...
I0221 09:04:05.619039 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e: {Name:mk723313c8c1c15643497cb4692d37cb78d49b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:04:05.619161 450843 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt
I0221 09:04:05.619229 450843 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key
I0221 09:04:05.619275 450843 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key
I0221 09:04:05.619289 450843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt with IP's: []
I0221 09:04:05.755263 450843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt ...
I0221 09:04:05.755298 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt: {Name:mk0223fd1865dc442f565c7049baeaab60cc34f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:04:05.755491 450843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key ...
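
The crypto.go entries above are ordinary x509 issuance: a fresh key, a template carrying the SAN IPs from the log (192.168.67.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), signed by the profile CA. A condensed sketch with crypto/x509; the subject, lifetime, and helper names are assumptions for illustration, and a throwaway CA is generated so the example is self-contained:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues an apiserver-style cert with IP SANs, signed by the CA.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA, standing in for the profile's ca.crt / ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, &caTmpl, &caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := signServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
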
I0221 09:04:05.755509 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key: {Name:mkbe785cfdb21e9c0948d8fa3b523861363916c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:04:05.755667 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes)
W0221 09:04:05.755702 450843 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes
I0221 09:04:05.755716 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes)
I0221 09:04:05.755740 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes)
I0221 09:04:05.755766 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes)
I0221 09:04:05.755787 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes)
I0221 09:04:05.755825 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes)
I0221 09:04:05.756637 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0221 09:04:05.775376 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0221 09:04:05.793336 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0221 09:04:05.810830 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0221 09:04:05.828582 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0221 09:04:05.846167 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0221 09:04:05.864214 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0221 09:04:05.882412 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0221 09:04:05.900011 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes)
I0221 09:04:05.919790 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0221 09:04:05.939351 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes)
I0221 09:04:05.957173 450843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0221 09:04:05.970053 450843 ssh_runner.go:195] Run: openssl version
I0221 09:04:05.975134 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem"
I0221 09:04:05.982704 450843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem
I0221 09:04:05.986765 450843 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem
I0221 09:04:05.986816 450843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem
I0221 09:04:05.991765 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0"
I0221 09:04:05.999292 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0221 09:04:06.006814 450843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0221 09:04:06.010099 450843 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem
I0221 09:04:06.010146 450843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0221 09:04:06.015300 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:04:06.023344 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:04:06.031283 450843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:04:06.034681 450843 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:04:06.034729 450843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:04:06.039625 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:04:06.048486 450843 kubeadm.go:391] StartCluster: {Name:bridge-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:04:06.048635 450843 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:04:06.081455 450843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:04:06.088719 450843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:04:06.095804 450843 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:04:06.095857 450843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 
09:04:06.102652 450843 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:04:06.102690 450843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:04:02.500237 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:03.000849 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:03.500336 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:04.000890 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:04.500907 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:05.000591 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:05.501215 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:06.001079 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:06.500383 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:07.000795 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:06.625853 450843 out.go:203] - Generating certificates and keys ... I0221 09:04:09.076334 450843 out.go:203] - Booting up control plane ... 
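
The scp/openssl/ln sequence at the top of this stretch is minikube installing the profile's CA material into the node's system trust store: each certificate is copied under /usr/share/ca-certificates, hashed with `openssl x509 -hash`, and linked into /etc/ssl/certs both by name and by subject hash, which is the lookup layout OpenSSL expects. A minimal sketch of the same pattern, using the 65502.pem cert from the log (the hash value, 3ec20f2e above, depends on the certificate's subject):

# Sketch of the hash-symlink step shown above; works for any PEM cert.
CERT=/usr/share/ca-certificates/65502.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")
# OpenSSL resolves trust anchors by <subject-hash>.<n>, so link the cert there:
sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

The `config check failed, skipping stale config cleanup` entry above is the expected path on a fresh node: none of the four kubeconfigs kubeadm writes exist yet, so `ls` exits with status 2 and minikube proceeds straight to `kubeadm init`, pre-suppressing the preflight checks (Swap, Mem, SystemVerification, the port/dir/file availability checks) that are known to fail inside a docker-driver container.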
I0221 09:04:07.500614 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:08.000920 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:08.501012 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:09.000860 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:09.500890 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:10.000642 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:10.500378 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:11.000503 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:11.501273 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:12.000626 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:12.501284 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:13.000940 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:13.500277 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:13.714438 442801 kubeadm.go:1020] duration metric: took 12.576875325s to wait for elevateKubeSystemPrivileges. I0221 09:04:13.714474 442801 kubeadm.go:393] StartCluster complete in 27.983669732s I0221 09:04:13.714495 442801 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:13.714612 442801 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:04:13.716690 442801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} W0221 09:04:13.743696 442801 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again I0221 09:04:14.746909 442801 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "enable-default-cni-20220221084933-6550" rescaled to 1 I0221 09:04:14.747037 442801 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:04:14.748728 442801 out.go:176] * Verifying Kubernetes components... 
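
The long run of `kubectl get sa default` calls above comes from process 442801 (the enable-default-cni profile; the interleaved 450843 lines belong to the bridge profile running in parallel). It is the wait inside elevateKubeSystemPrivileges: poll every 500ms until the `default` ServiceAccount exists, which minikube takes as the sign that the controller-manager's token controller has finished bootstrapping; here it took 12.58s. The same wait expressed as a loop:

# Equivalent poll loop; the 0.5s cadence matches the .000/.500 timestamps above.
KUBECTL=/var/lib/minikube/binaries/v1.23.4/kubectl
until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done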
I0221 09:04:14.747075 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:04:14.747094 442801 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:04:14.748929 442801 addons.go:65] Setting storage-provisioner=true in profile "enable-default-cni-20220221084933-6550" I0221 09:04:14.748952 442801 addons.go:153] Setting addon storage-provisioner=true in "enable-default-cni-20220221084933-6550" W0221 09:04:14.748963 442801 addons.go:165] addon storage-provisioner should already be in state true I0221 09:04:14.748992 442801 host.go:66] Checking if "enable-default-cni-20220221084933-6550" exists ... I0221 09:04:14.749670 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:04:14.747302 442801 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:04:14.748793 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:04:14.749834 442801 addons.go:65] Setting default-storageclass=true in profile "enable-default-cni-20220221084933-6550" I0221 09:04:14.749850 442801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-20220221084933-6550" I0221 09:04:14.750123 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:04:14.812896 442801 addons.go:153] Setting addon default-storageclass=true in "enable-default-cni-20220221084933-6550" W0221 09:04:14.812923 442801 addons.go:165] addon default-storageclass should already be in state true I0221 09:04:14.812954 442801 host.go:66] Checking if "enable-default-cni-20220221084933-6550" exists ... I0221 09:04:14.815855 442801 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:04:14.816096 442801 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:14.816112 442801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:04:14.816168 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:04:14.813484 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:04:14.854856 442801 node_ready.go:35] waiting up to 5m0s for node "enable-default-cni-20220221084933-6550" to be "Ready" ... I0221 09:04:14.855914 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:04:14.860755 442801 node_ready.go:49] node "enable-default-cni-20220221084933-6550" has status "Ready":"True" I0221 09:04:14.860778 442801 node_ready.go:38] duration metric: took 5.885081ms waiting for node "enable-default-cni-20220221084933-6550" to be "Ready" ... 
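
The `docker container inspect -f '{{(index ...)}}'` calls above are how minikube finds its way onto the node: Docker publishes the container's sshd on an ephemeral host port, and the Go template extracts it so the SSH client can dial 127.0.0.1 with the profile's id_rsa (port 49389, as the sshutil lines that follow show). The lookup works standalone:

# Prints the host port Docker mapped to the node container's 22/tcp.
docker container inspect \
  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
  enable-default-cni-20220221084933-6550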
I0221 09:04:14.860789 442801 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:04:14.866764 442801 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:04:14.866793 442801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:04:14.866843 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:04:14.873907 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:04:14.877501 442801 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-4pdmv" in "kube-system" namespace to be "Ready" ... I0221 09:04:14.914653 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:04:15.119543 442801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:04:15.120676 442801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:15.920017 442801 pod_ready.go:92] pod "coredns-64897985d-4pdmv" in "kube-system" namespace has status "Ready":"True" I0221 09:04:15.920046 442801 pod_ready.go:81] duration metric: took 1.042510787s waiting for pod "coredns-64897985d-4pdmv" in "kube-system" namespace to be "Ready" ... I0221 09:04:15.920062 442801 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-mr75l" in "kube-system" namespace to be "Ready" ... I0221 09:04:16.209420 442801 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.35347231s) I0221 09:04:16.209452 442801 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS I0221 09:04:16.210214 442801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.090624607s) I0221 09:04:16.251854 442801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131142214s) I0221 09:04:16.620063 450843 out.go:203] - Configuring RBAC rules ... 
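
The 1.35s `Completed:` entry above is minikube's host-record injection: it fetches the kube-system/coredns ConfigMap, splices a hosts{} block mapping host.minikube.internal to the host-side gateway (192.168.58.1 for this profile) in front of the forward directive, and replaces the object in place. Unrolled for readability below; note the log collapses runs of spaces, so the indentation inside the sed expression is an assumption that must match the actual Corefile:

# Unrolled form of the pipeline above; Corefile indentation is assumed.
KUBECTL="/var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
sudo $KUBECTL -n kube-system get configmap coredns -o yaml \
  | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' \
  | sudo $KUBECTL replace -f -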
I0221 09:04:17.034273 450843 cni.go:93] Creating CNI manager for "bridge" I0221 09:04:16.253869 442801 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:04:16.253898 442801 addons.go:417] enableAddons completed in 1.506809944s I0221 09:04:17.036370 450843 out.go:176] * Configuring bridge CNI (Container Networking Interface) ... I0221 09:04:17.036445 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d I0221 09:04:17.045026 450843 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes) I0221 09:04:17.102662 450843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:04:17.102740 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=bridge-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T09_04_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:17.102740 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:17.205474 450843 ops.go:34] apiserver oom_adj: -16 I0221 09:04:17.548076 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:18.144207 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:18.644693 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:19.144213 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:19.644290 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:20.144831 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:20.644017 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:17.934661 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:19.937394 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:21.144510 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:21.644204 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:22.144205 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:22.644156 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:23.144049 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:23.644213 450843 
ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:24.144756 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:24.644658 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:25.143988 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:25.643976 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:22.434245 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:24.935381 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:26.143995 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:26.644758 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:27.144380 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:27.643849 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:28.144209 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:28.644175 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:29.144225 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:29.644097 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:29.699085 450843 kubeadm.go:1020] duration metric: took 12.596412472s to wait for elevateKubeSystemPrivileges. 
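
The bridge profile's `Configuring bridge CNI` step above wrote a conflist straight from memory into /etc/cni/net.d (457 bytes). The log never shows the file's contents, so the following is only an assumed shape based on a typical bridge-plugin conflist, not the exact bytes minikube writes:

# Assumed shape only; every field value here is illustrative.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF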
I0221 09:04:29.699118 450843 kubeadm.go:393] StartCluster complete in 23.650643743s I0221 09:04:29.699139 450843 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:29.699242 450843 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:04:29.700907 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:30.220009 450843 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-20220221084933-6550" rescaled to 1 I0221 09:04:30.220105 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:04:30.220120 450843 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:04:30.220167 450843 addons.go:65] Setting storage-provisioner=true in profile "bridge-20220221084933-6550" I0221 09:04:30.220185 450843 addons.go:153] Setting addon storage-provisioner=true in "bridge-20220221084933-6550" W0221 09:04:30.220199 450843 addons.go:165] addon storage-provisioner should already be in state true I0221 09:04:30.220225 450843 host.go:66] Checking if "bridge-20220221084933-6550" exists ... I0221 09:04:30.220418 450843 config.go:176] Loaded profile config "bridge-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:04:30.220469 450843 addons.go:65] Setting default-storageclass=true in profile "bridge-20220221084933-6550" I0221 09:04:30.220481 450843 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-20220221084933-6550" I0221 09:04:30.220735 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:30.220735 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:30.220094 450843 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:04:30.223111 450843 out.go:176] * Verifying Kubernetes components... 
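
Note that both profiles serialize on the same settings lock (mk4400923ef35d7d80e21aa000bc7683aef0fb09) and the same kubeconfig write lock before merging their contexts into the shared kubeconfig, which is what keeps the two parallel tests from clobbering each other's entries. minikube names the context after the profile, so once `Updating kubeconfig` has run the new cluster is addressable directly:

# Select the freshly merged context (kubeconfig path taken from the log above).
kubectl --kubeconfig /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig \
  config use-context bridge-20220221084933-6550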
I0221 09:04:30.223190 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:04:30.266915 450843 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:04:30.267101 450843 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:30.267118 450843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:04:30.267155 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:30.301955 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:30.305286 450843 addons.go:153] Setting addon default-storageclass=true in "bridge-20220221084933-6550" W0221 09:04:30.305320 450843 addons.go:165] addon default-storageclass should already be in state true I0221 09:04:30.305354 450843 host.go:66] Checking if "bridge-20220221084933-6550" exists ... I0221 09:04:30.305864 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:30.354025 450843 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:04:30.354053 450843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:04:30.354110 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:30.388147 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:30.423881 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:04:30.427457 450843 node_ready.go:35] waiting up to 5m0s for node "bridge-20220221084933-6550" to be "Ready" ... I0221 09:04:30.432136 450843 node_ready.go:49] node "bridge-20220221084933-6550" has status "Ready":"True" I0221 09:04:30.432196 450843 node_ready.go:38] duration metric: took 4.664633ms waiting for node "bridge-20220221084933-6550" to be "Ready" ... I0221 09:04:30.432212 450843 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:04:30.441872 450843 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-7jshp" in "kube-system" namespace to be "Ready" ... 
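
node_ready and pod_ready above are minikube's own readiness gates rather than kubectl invocations: the node gate returned in 4.66ms because the Node object was already Ready, while the pod gate walks the system-critical label set and, as the rest of this log shows, ends up blocked on the CoreDNS pods. Roughly the same wait expressed with stock kubectl:

# Approximate kubectl equivalent of the pod_ready gate for the DNS pods.
sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=5m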
I0221 09:04:30.521878 450843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:30.527474 450843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:04:31.733720 450843 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.309793379s) I0221 09:04:31.733768 450843 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS I0221 09:04:31.839340 450843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31737173s) I0221 09:04:31.839463 450843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.311959107s) I0221 09:04:27.435159 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:29.936608 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:31.841123 450843 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 09:04:31.841211 450843 addons.go:417] enableAddons completed in 1.62109441s I0221 09:04:32.455774 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:34.955583 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:32.435115 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:34.935474 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:36.935912 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:36.956049 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:39.456093 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:38.936067 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:41.434307 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:41.456541 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:43.955832 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:43.436111 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:45.934096 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:46.456741 
450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:48.956151 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:47.935259 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:50.434755 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:51.455727 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:53.955094 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:55.955440 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:52.934820 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:54.937852 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:57.956232 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:59.956365 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:57.435035 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:59.934049 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:01.934530 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:02.455268 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:04.455740 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:03.935138 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:06.434776 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:06.456348 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:08.956186 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:08.935346 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:10.935572 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:11.455698 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:13.455791 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:15.456360 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:13.433982 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:15.434465 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:17.956930 450843 
pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:20.455598 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:17.434688 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:19.935355 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:22.955384 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:24.956304 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:22.435376 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:24.935574 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:27.456347 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:29.457022 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:27.434317 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:29.435822 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:31.935306 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:31.956186 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:34.455982 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:34.434660 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:36.435252 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:36.956407 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:38.956481 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:38.935022 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:41.434276 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:41.455912 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:43.955728 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:43.434849 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:45.435045 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:46.455771 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:48.456125 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:50.955535 450843 pod_ready.go:102] 
pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:47.435419 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:49.934762 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:51.935246 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:53.455757 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:55.955989 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:54.435110 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:56.934967 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:58.455600 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:00.456412 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:58.935863 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:01.435228 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:02.956159 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:05.456333 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:03.435332 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:05.934148 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:07.955776 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:10.456294 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:07.934512 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:09.935280 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:12.955819 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:15.456281 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:12.434971 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:14.935453 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:16.935799 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:17.955309 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:19.955885 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:19.434977 442801 pod_ready.go:102] pod 
"coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:21.934868 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:21.955949 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:24.456143 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:23.935521 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:26.435200 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:26.955294 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:29.455254 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:28.934695 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:30.935605 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:31.955457 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:33.955912 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:35.956083 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:33.434314 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:35.435174 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:38.456121 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:40.456245 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:37.935547 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:40.434317 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:42.955236 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:44.955806 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:42.434719 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:44.435481 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:46.934690 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:47.455860 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:49.457435 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:49.434637 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:51.935545 442801 pod_ready.go:102] pod 
"coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:51.955645 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:53.955837 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:55.955962 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:54.434212 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:56.435042 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:58.456029 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:00.956576 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:58.934726 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:00.936082 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:03.455781 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:05.955666 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:02.936505 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:05.435785 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:07.955926 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:10.456454 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:07.934506 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:09.934869 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:11.935083 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:12.956960 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:15.456313 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:13.935264 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:16.434803 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 08:57:00 UTC, end at Mon 2022-02-21 09:07:21 UTC. 
--
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666768688Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666795906Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666814149Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666822586Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.670743564Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676732207Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676756014Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676761700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676921956Z" level=info msg="Loading containers: start."
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.768531671Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.805140931Z" level=info msg="Loading containers: done."
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.825275313Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.825342305Z" level=info msg="Daemon has completed initialization"
Feb 21 08:57:02 auto-20220221084933-6550 systemd[1]: Started Docker Application Container Engine.
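
From this point the output is no longer the test harness but material collected off the node itself; this section is the dockerd unit journal, hence the `-- Logs begin at ...` header. The recurring `ignoring event ... TaskDelete` entries below are ordinary container-exit notifications; the 09:06:43 one matches the storage-provisioner container (88e3a5d7acafa) shown as Exited in the container-status table that follows. The same journal can be read directly on the node:

# Reads the dockerd journal for this profile's node.
minikube ssh -p auto-20220221084933-6550 "sudo journalctl -u docker --no-pager"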
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.844491195Z" level=info msg="API listen on [::]:2376"
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.850635107Z" level=info msg="API listen on /var/run/docker.sock"
Feb 21 08:57:43 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:43.342083627Z" level=info msg="ignoring event" container=0c459eb8fed84d243a28367e4c6028d00b83be6a1b9ceb50262498a6589c186c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 08:57:43 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:43.407322662Z" level=info msg="ignoring event" container=f47bad55ea0449b1b8d785312c064d318e410699acf2b083fa64672b4050538d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 08:58:04 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:58:04.588382338Z" level=info msg="ignoring event" container=70f6c474ffc9d742d5078efd920a71d49d8b5f63e6ef155b915f2b7be6a7b31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 08:58:35 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:58:35.573240058Z" level=info msg="ignoring event" container=fc5f64a664c235a0ed09411bff0370c4cb20ea225e43d0dca8547983013e4b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 08:59:17 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:59:17.690530130Z" level=info msg="ignoring event" container=28664154f0a61332a8c7e00f53457bc0cd85d7502285b9b6f234a56fe501be70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:00:11 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:00:11.696415101Z" level=info msg="ignoring event" container=197c1336a22eab95236acd198fce43de13c1b0584fa184af45e5e69609ade3d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:01:30 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:01:30.689070641Z" level=info msg="ignoring event" container=1cd0b722c1ad179fbe04eec47fa9672f60c6ce42361d20956d75d80fafb850cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:03:28 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:03:28.729961272Z" level=info msg="ignoring event" container=eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:06:43 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:06:43.681981932Z" level=info msg="ignoring event" container=88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID
88e3a5d7acafa   6e38f40d628db   About a minute ago   Exited    storage-provisioner       6         effeeb1480903
b7624aca6f588   k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1   5 minutes ago   Running   dnsutils   0   00e602440bce3
9ec110d5717f1   a4ca41631cc7a   9 minutes ago        Running   coredns                   0         42d636d8f7715
76924ebff8388   2114245ec4d6b   9 minutes ago        Running   kube-proxy                0         d10927a13c2b7
b23ee2bbc19da   25f8c7f3da61c   10 minutes ago       Running   etcd                      0         d92c7b63f2668
0bb1b94ca5a9e   25444908517a5   10 minutes ago       Running   kube-controller-manager   0         aca06434b9eef
c78588822ac6e   aceacb6244f9f   10 minutes ago       Running   kube-scheduler            0         7b143332b596c
ee44803ab83a5   62930710c9634   10 minutes ago       Running   kube-apiserver            0         31a5d6a981fd8
*
* ==> coredns [9ec110d5717f] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
*
* ==> describe nodes <==
* Name:               auto-20220221084933-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=auto-20220221084933-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=auto-20220221084933-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T08_57_18_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 08:57:15 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  auto-20220221084933-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:07:20 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 21 Feb 2022 09:02:24 +0000   Mon, 21 Feb 2022 08:57:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 21 Feb 2022 09:02:24 +0000   Mon, 21 Feb 2022 08:57:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 21 Feb 2022 09:02:24 +0000   Mon, 21 Feb 2022 08:57:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 21 Feb 2022 09:02:24 +0000   Mon, 21 Feb 2022 08:57:28 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    auto-20220221084933-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: 6245a238-b599-4ae4-881d-541b5f730f40 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default netcat-668db85669-v8bk5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s kube-system coredns-64897985d-rg6k7 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 9m50s kube-system etcd-auto-20220221084933-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 10m kube-system kube-apiserver-auto-20220221084933-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-controller-manager-auto-20220221084933-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-proxy-j6t4r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m50s kube-system kube-scheduler-auto-20220221084933-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m48s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 9m48s kube-proxy Normal NodeHasSufficientMemory 10m kubelet Node auto-20220221084933-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 10m kubelet Node auto-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m kubelet Node auto-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods Normal Starting 10m kubelet Starting kubelet.
Normal NodeReady 9m53s kubelet Node auto-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000008] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +11.606902] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.995903] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000008] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.999615] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +11.726696] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.996095] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.999665] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [Feb21 09:06] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.995939] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +5.003671] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000008] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +25.459126] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.998672] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.999618] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 * * ==> etcd [b23ee2bbc19d] <== * {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 
{"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:auto-20220221084933-6550 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:57:12.408Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T08:57:12.408Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T08:57:12.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"} {"level":"info","ts":"2022-02-21T08:57:12.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:03:36.569Z","caller":"traceutil/trace.go:171","msg":"trace[55828461] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"116.088575ms","start":"2022-02-21T09:03:36.453Z","end":"2022-02-21T09:03:36.569Z","steps":["trace[55828461] 'process raft request' (duration: 113.749116ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:03:56.662Z","caller":"traceutil/trace.go:171","msg":"trace[1655245063] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"111.010424ms","start":"2022-02-21T09:03:56.551Z","end":"2022-02-21T09:03:56.662Z","steps":["trace[1655245063] 'process raft request' (duration: 13.130369ms)","trace[1655245063] 'compare' (duration: 97.781597ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:03:56.662Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.098376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-02-21T09:03:56.662Z","caller":"traceutil/trace.go:171","msg":"trace[1374387149] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:648; }","duration":"193.270999ms","start":"2022-02-21T09:03:56.469Z","end":"2022-02-21T09:03:56.662Z","steps":["trace[1374387149] 'agreement among raft nodes before linearized reading' (duration: 95.291053ms)","trace[1374387149] 'range keys from in-memory index tree' (duration: 97.769916ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:03:56.779Z","caller":"traceutil/trace.go:171","msg":"trace[1954821084] linearizableReadLoop","detail":"{readStateIndex:744; appliedIndex:744; }","duration":"114.80934ms","start":"2022-02-21T09:03:56.664Z","end":"2022-02-21T09:03:56.779Z","steps":["trace[1954821084] 'read index received' (duration: 114.79616ms)","trace[1954821084] 'applied index is now lower than readState.Index' (duration: 11.307µs)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:03:56.878Z","caller":"etcdserver/util.go:166","msg":"apply request took too 
long","took":"214.207147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"} {"level":"info","ts":"2022-02-21T09:03:56.878Z","caller":"traceutil/trace.go:171","msg":"trace[679182698] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:649; }","duration":"214.301324ms","start":"2022-02-21T09:03:56.664Z","end":"2022-02-21T09:03:56.878Z","steps":["trace[679182698] 'agreement among raft nodes before linearized reading' (duration: 114.936027ms)","trace[679182698] 'range keys from in-memory index tree' (duration: 99.227689ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:07:12.424Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":621} {"level":"info","ts":"2022-02-21T09:07:12.425Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":621,"took":"587.592µs"} * * ==> kernel <== * 09:07:21 up 49 min, 0 users, load average: 0.88, 2.77, 3.13 Linux auto-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [ee44803ab83a] <== * I0221 08:57:14.932266 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 08:57:14.947237 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:57:14.951403 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:57:14.952540 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 08:57:14.953486 1 shared_informer.go:247] Caches are synced for crd-autoregister I0221 08:57:15.845870 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:57:15.852269 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:57:15.854707 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:57:15.856109 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:57:15.856133 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 08:57:16.325532 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 08:57:16.364060 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 08:57:16.436271 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 08:57:16.441781 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2] I0221 08:57:16.442808 1 controller.go:611] quota admission added evaluator for: endpoints I0221 08:57:16.446390 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 08:57:16.983217 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 08:57:18.042389 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 08:57:18.049860 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 08:57:18.062188 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 08:57:18.344341 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 08:57:30.622969 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 08:57:31.608959 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 08:57:32.936691 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:01:46.059129 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.110.232.80] * * ==> kube-controller-manager [0bb1b94ca5a9] <== * I0221 08:57:30.727273 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0221 08:57:30.727289 1 shared_informer.go:247] Caches are synced for cidrallocator I0221 08:57:30.733960 1 range_allocator.go:374] Set node auto-20220221084933-6550 PodCIDR to [10.244.0.0/24] I0221 08:57:30.744945 1 shared_informer.go:247] Caches are synced for endpoint_slice I0221 08:57:30.768858 1 shared_informer.go:247] Caches are synced for persistent volume I0221 08:57:30.768854 1 shared_informer.go:247] Caches are synced for attach detach I0221 08:57:30.768874 1 shared_informer.go:247] Caches are synced for TTL I0221 08:57:30.785681 1 shared_informer.go:247] Caches are synced for daemon sets I0221 08:57:30.818241 1 shared_informer.go:247] Caches are synced for GC I0221 08:57:30.823685 1 shared_informer.go:247] Caches are synced for taint I0221 08:57:30.823827 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: I0221 08:57:30.823901 1 taint_manager.go:187] "Starting NoExecuteTaintManager" W0221 08:57:30.823933 1 node_lifecycle_controller.go:1012] Missing timestamp for Node auto-20220221084933-6550. Assuming now as a timestamp. I0221 08:57:30.823990 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0221 08:57:30.824086 1 event.go:294] "Event occurred" object="auto-20220221084933-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node auto-20220221084933-6550 event: Registered Node auto-20220221084933-6550 in Controller" I0221 08:57:31.188640 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:57:31.188666 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0221 08:57:31.210469 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:57:31.429366 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-6wgl9" I0221 08:57:31.436890 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rg6k7" I0221 08:57:31.558096 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 08:57:31.605840 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-6wgl9" I0221 08:57:31.615476 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j6t4r" I0221 09:01:46.079752 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:01:46.088592 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-v8bk5" * * ==> kube-proxy [76924ebff838] <== * I0221 08:57:32.804263 1 node.go:163] Successfully retrieved node IP: 192.168.76.2 I0221 08:57:32.804361 1 server_others.go:138] "Detected node IP" address="192.168.76.2" I0221 08:57:32.804399 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 08:57:32.910626 1 server_others.go:206] "Using iptables Proxier" I0221 08:57:32.910827 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 08:57:32.910952 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 08:57:32.911065 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 08:57:32.911574 1 server.go:656] "Version info" version="v1.23.4" I0221 08:57:32.916913 1 config.go:317] "Starting service config controller" I0221 08:57:32.916951 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 08:57:32.933406 1 config.go:226] "Starting endpoint slice config controller" I0221 08:57:32.934316 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 08:57:33.017817 1 shared_informer.go:247] Caches are synced for service config I0221 08:57:33.035222 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [c78588822ac6] <== * E0221 08:57:14.931862 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 08:57:14.931721 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" 
cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 08:57:14.932362 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0221 08:57:15.757399 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:57:15.757446 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:57:15.834131 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 08:57:15.834178 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 08:57:15.868186 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 08:57:15.868218 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 08:57:15.898542 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 08:57:15.898573 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0221 08:57:16.026736 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 08:57:16.026774 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 08:57:16.096389 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 08:57:16.096418 1 
reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 08:57:16.114163 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:57:16.114197 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:57:16.115203 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 08:57:16.115239 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 08:57:16.327598 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" W0221 08:57:16.339522 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 08:57:16.339559 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 08:57:17.123087 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" E0221 08:57:17.415353 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" I0221 08:57:18.921914 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 08:57:00 UTC, end at Mon 2022-02-21 09:07:22 UTC. 
-- Feb 21 09:04:21 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:21.536776 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:04:32 auto-20220221084933-6550 kubelet[2016]: I0221 09:04:32.536644 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:04:32 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:32.536889 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:04:46 auto-20220221084933-6550 kubelet[2016]: I0221 09:04:46.536596 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:04:46 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:46.536816 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:04:58 auto-20220221084933-6550 kubelet[2016]: I0221 09:04:58.536322 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:04:58 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:58.536530 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:12 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:12.536010 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:12 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:12.536215 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:25 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:25.536394 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:25 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:25.536660 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" 
pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:38 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:38.536059 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:38 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:38.536255 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:49 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:49.535615 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:49 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:49.535838 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:06:00 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:00.536189 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:06:00 auto-20220221084933-6550 kubelet[2016]: E0221 09:06:00.536424 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:06:13 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:13.536093 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:06:44 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:44.461250 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:06:44 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:44.461561 2016 scope.go:110] "RemoveContainer" containerID="88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b" Feb 21 09:06:44 auto-20220221084933-6550 kubelet[2016]: E0221 09:06:44.461786 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:06:56 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:56.535695 2016 scope.go:110] "RemoveContainer" containerID="88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b" Feb 21 09:06:56 auto-20220221084933-6550 kubelet[2016]: E0221 09:06:56.535984 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner 
pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:07:09 auto-20220221084933-6550 kubelet[2016]: I0221 09:07:09.535800 2016 scope.go:110] "RemoveContainer" containerID="88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b" Feb 21 09:07:09 auto-20220221084933-6550 kubelet[2016]: E0221 09:07:09.536018 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b * * ==> storage-provisioner [88e3a5d7acaf] <== * I0221 09:06:13.663212 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0221 09:06:43.666342 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p auto-20220221084933-6550 -n auto-20220221084933-6550 helpers_test.go:262: (dbg) Run: kubectl --context auto-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/auto]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context auto-20220221084933-6550 describe pod helpers_test.go:276: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 describe pod : exit status 1 (40.092807ms) ** stderr ** error: resource name may not be empty ** /stderr ** helpers_test.go:278: kubectl --context auto-20220221084933-6550 describe pod : exit status 1 helpers_test.go:176: Cleaning up "auto-20220221084933-6550" profile ... 
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p auto-20220221084933-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p auto-20220221084933-6550: (2.670258695s) === CONT TestNetworkPlugins/group/kubenet === RUN TestNetworkPlugins/group/kubenet/Start net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p kubenet-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker --container-runtime=docker === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:07:30.569068 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory E0221 09:07:38.292715 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151937061s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/enable-default-cni/Start net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker --container-runtime=docker: (4m54.694157846s) === RUN TestNetworkPlugins/group/enable-default-cni/KubeletFlags net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p enable-default-cni-20220221084933-6550 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/enable-default-cni/NetCatPod net_test.go:132: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... 
helpers_test.go:343: "netcat-668db85669-fm848" [813ad8bd-230d-4b32-81c6-ab2109b7e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) E0221 09:08:29.174519 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory helpers_test.go:343: "netcat-668db85669-fm848" [813ad8bd-230d-4b32-81c6-ab2109b7e0a7] Running === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14103386s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/enable-default-cni/NetCatPod net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006467697s === RUN TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/bridge/Start net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=docker: (4m50.530470557s) === RUN TestNetworkPlugins/group/bridge/KubeletFlags net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p bridge-20220221084933-6550 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/bridge/NetCatPod net_test.go:132: (dbg) Run: kubectl --context bridge-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... 
helpers_test.go:343: "netcat-668db85669-f2pzb" [01d71b96-fb12-4f85-808c-6495638c70c6] Pending helpers_test.go:343: "netcat-668db85669-f2pzb" [01d71b96-fb12-4f85-808c-6495638c70c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) helpers_test.go:343: "netcat-668db85669-f2pzb" [01d71b96-fb12-4f85-808c-6495638c70c6] Running net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006812868s === RUN TestNetworkPlugins/group/bridge/DNS net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155817581s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:09:00.213316 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:09:05.984497 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/bridge/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160193644s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137360185s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/bridge/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127958606s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127057858s) -- stdout -- ;; connection timed out; no servers could be 
reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:09:33.149088 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/bridge/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146887863s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136062008s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** === CONT TestNetworkPlugins/group/bridge/DNS net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/kindnet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.253102507s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* === CONT TestNetworkPlugins/group/kindnet net_test.go:198: "kindnet" test finished in 20m9.755113612s, failed=true net_test.go:199: *** TestNetworkPlugins/group/kindnet FAILED at 2022-02-21 09:09:43.756287859 +0000 UTC m=+2676.518607461 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/kindnet]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect kindnet-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect kindnet-20220221084934-6550: -- stdout -- [ { "Id": "c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8", "Created": "2022-02-21T09:02:53.536636017Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 423673, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T09:02:53.928380162Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/resolv.conf", "HostnamePath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/hostname", "HostsPath": 
"/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/hosts", "LogPath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8-json.log", "Name": "/kindnet-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "kindnet-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "kindnet-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344
cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804/merged", "UpperDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804/diff", "WorkDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "kindnet-20220221084934-6550", "Source": "/var/lib/docker/volumes/kindnet-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "kindnet-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "kindnet-20220221084934-6550", "name.minikube.sigs.k8s.io": "kindnet-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "64a33474aca43e4c210eb7d638d4895ff263c795f7e4d8f9cf9b27e15672955f", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49384" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49383" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49380" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49382" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49381" } ] }, "SandboxKey": "/var/run/docker/netns/64a33474aca4", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "kindnet-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.49.2" }, "Links": null, "Aliases": [ "c1e6246a9875", "kindnet-20220221084934-6550" ], "NetworkID": "5d96ab4d6b1ae076cca503cf53d5c36ffb8868b0be10b67aca009ffaf43ed991", "EndpointID": 
"48eee4fc9b8f861162fedf1f848e7419fd58c043f8784d407ccf05d104b0ad30", "Gateway": "192.168.49.1", "IPAddress": "192.168.49.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:31:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kindnet-20220221084934-6550 -n kindnet-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/kindnet FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/kindnet]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p kindnet-20220221084934-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p kindnet-20220221084934-6550 logs -n 25: (1.077134548s) helpers_test.go:253: TestNetworkPlugins/group/kindnet logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | 
auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | 
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:07:25 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:07:25.365204 462115 out.go:297] Setting OutFile to fd 1 ... I0221 09:07:25.365306 462115 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:07:25.365316 462115 out.go:310] Setting ErrFile to fd 2... I0221 09:07:25.365320 462115 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:07:25.365432 462115 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:07:25.365703 462115 out.go:304] Setting JSON to false I0221 09:07:25.367382 462115 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3000,"bootTime":1645431446,"procs":605,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:07:25.367473 462115 start.go:122] virtualization: kvm guest I0221 09:07:25.370626 462115 out.go:176] * [kubenet-20220221084933-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:07:25.372118 462115 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:07:25.370818 462115 notify.go:193] Checking for updates... I0221 09:07:25.373713 462115 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:07:25.375244 462115 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:07:25.376596 462115 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:07:25.378032 462115 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:07:25.378593 462115 config.go:176] Loaded profile config "bridge-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:25.378683 462115 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:25.378760 462115 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:25.378816 462115 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:07:25.423110 462115 docker.go:132] docker version: linux-20.10.12 I0221 09:07:25.423225 462115 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:07:25.519741 462115 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true 
CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:07:25.456330991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:07:25.519904 462115 docker.go:237] overlay module found I0221 09:07:25.522315 462115 out.go:176] * Using the docker driver based on user configuration I0221 09:07:25.522340 462115 start.go:281] selected driver: docker I0221 09:07:25.522345 462115 start.go:798] validating driver "docker" against I0221 09:07:25.522361 462115 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:07:25.522420 462115 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:07:25.522438 462115 out.go:241] ! Your cgroup does not allow setting memory. 
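The driver validation above runs docker system info with a JSON template and then warns that the cgroup does not allow setting memory. A minimal Go sketch of that shape of check, assuming only that the Docker CLI is on PATH; minikube's actual oci.go test also inspects the mounted cgroup hierarchy, which is why it can warn even though MemoryLimit reports true in the info dump here:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // info keeps only the two fields of `docker system info` output that this
    // check needs; the full struct shown in the log carries many more.
    type info struct {
        MemoryLimit bool
        SwapLimit   bool
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var i info
        if err := json.Unmarshal(out, &i); err != nil {
            panic(err)
        }
        if !i.MemoryLimit {
            fmt.Println("! Your cgroup does not allow setting memory.")
        }
        if !i.SwapLimit {
            fmt.Println("! Your cgroup does not allow setting swap.")
        }
    }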
I0221 09:07:25.524080 462115 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:07:25.524710 462115 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:07:25.619214 462115 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:07:25.556542364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 09:07:25.619324 462115 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:07:25.619470 462115 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:07:25.619492 462115 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:07:25.619507 462115 cni.go:89] network plugin configured as "kubenet", returning disabled I0221 09:07:25.619518 462115 start_flags.go:302] config: {Name:kubenet-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:07:25.622014 462115 out.go:176] * Starting control plane node kubenet-20220221084933-6550 in cluster kubenet-20220221084933-6550 I0221 09:07:25.622065 462115 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:07:25.623707 462115 out.go:176] * Pulling base image ... 
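The preload check that follows looks for a cached tarball whose filename encodes the preload schema version, Kubernetes version, container runtime, storage driver, and CPU architecture. A small sketch of that naming convention; every component, including the "v17" schema version, is taken verbatim from this run:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // preloadName rebuilds the tarball filename that appears in the log below.
    func preloadName(k8sVersion, runtime, storageDriver, arch string) string {
        return fmt.Sprintf("preloaded-images-k8s-v17-%s-%s-%s-%s.tar.lz4",
            k8sVersion, runtime, storageDriver, arch)
    }

    func main() {
        name := preloadName("v1.23.4", "docker", "overlay2", "amd64")
        fmt.Println(filepath.Join(".minikube", "cache", "preloaded-tarball", name))
    }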
I0221 09:07:25.623738 462115 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:07:25.623772 462115 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:07:25.623791 462115 cache.go:57] Caching tarball of preloaded images I0221 09:07:25.623831 462115 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:07:25.624045 462115 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:07:25.624062 462115 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:07:25.624170 462115 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/config.json ... I0221 09:07:25.624203 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/config.json: {Name:mk436cd9a3d44441ff51e526a3022ca41e7119cc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:25.670154 462115 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:07:25.670180 462115 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:07:25.670197 462115 cache.go:208] Successfully downloaded all kic artifacts I0221 09:07:25.670229 462115 start.go:313] acquiring machines lock for kubenet-20220221084933-6550: {Name:mkef701a995f5d6461266930b6bc546896915ade Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:07:25.670358 462115 start.go:317] acquired machines lock for "kubenet-20220221084933-6550" in 111.979µs I0221 09:07:25.670381 462115 start.go:89] Provisioning new machine with config: &{Name:kubenet-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local 
ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:07:25.670463 462115 start.go:126] createHost starting for "" (driver="docker") I0221 09:07:22.956716 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:25.455908 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:22.936280 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:25.434799 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:25.672815 462115 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:07:25.673042 462115 start.go:160] libmachine.API.Create for "kubenet-20220221084933-6550" (driver="docker") I0221 09:07:25.673070 462115 client.go:168] LocalClient.Create starting I0221 09:07:25.673128 462115 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:07:25.673157 462115 main.go:130] libmachine: Decoding PEM data... I0221 09:07:25.673177 462115 main.go:130] libmachine: Parsing certificate... I0221 09:07:25.673234 462115 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:07:25.673252 462115 main.go:130] libmachine: Decoding PEM data... I0221 09:07:25.673266 462115 main.go:130] libmachine: Parsing certificate... 
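LocalClient.Create starts by loading the host-side CA and client certificates, and the three steps logged here (reading certificate data, decoding PEM data, parsing certificate) map directly onto Go's standard library. A self-contained sketch with an illustrative path; the run above reads .minikube/certs/ca.pem:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.pem") // reading certificate data
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // decoding PEM data
        if block == nil || block.Type != "CERTIFICATE" {
            panic("no certificate block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes) // parsing certificate
        if err != nil {
            panic(err)
        }
        fmt.Println("CA subject:", cert.Subject, "expires:", cert.NotAfter)
    }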
I0221 09:07:25.673584 462115 cli_runner.go:133] Run: docker network inspect kubenet-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:07:25.706536 462115 cli_runner.go:180] docker network inspect kubenet-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:07:25.706603 462115 network_create.go:254] running [docker network inspect kubenet-20220221084933-6550] to gather additional debugging logs... I0221 09:07:25.706621 462115 cli_runner.go:133] Run: docker network inspect kubenet-20220221084933-6550 W0221 09:07:25.739858 462115 cli_runner.go:180] docker network inspect kubenet-20220221084933-6550 returned with exit code 1 I0221 09:07:25.739894 462115 network_create.go:257] error running [docker network inspect kubenet-20220221084933-6550]: docker network inspect kubenet-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: kubenet-20220221084933-6550 I0221 09:07:25.739908 462115 network_create.go:259] output of [docker network inspect kubenet-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: kubenet-20220221084933-6550 ** /stderr ** I0221 09:07:25.739962 462115 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:07:25.774491 462115 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5d96ab4d6b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:0b:cb:43}} I0221 09:07:25.775193 462115 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3436ceea5013 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ca:78:ad:42}} I0221 09:07:25.775878 462115 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-0c80bded97cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:76:f1:e1}} I0221 09:07:25.776653 462115 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} 
dirty:map[192.168.76.0:0xc0006540f8] misses:0} I0221 09:07:25.776701 462115 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:07:25.776716 462115 network_create.go:106] attempt to create docker network kubenet-20220221084933-6550 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ... I0221 09:07:25.776774 462115 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220221084933-6550 I0221 09:07:25.846685 462115 network_create.go:90] docker network kubenet-20220221084933-6550 192.168.76.0/24 created I0221 09:07:25.846730 462115 kic.go:106] calculated static IP "192.168.76.2" for the "kubenet-20220221084933-6550" container I0221 09:07:25.846789 462115 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:07:25.881096 462115 cli_runner.go:133] Run: docker volume create kubenet-20220221084933-6550 --label name.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:07:25.915063 462115 oci.go:102] Successfully created a docker volume kubenet-20220221084933-6550 I0221 09:07:25.915146 462115 cli_runner.go:133] Run: docker run --rm --name kubenet-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --entrypoint /usr/bin/test -v kubenet-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:07:26.487335 462115 oci.go:106] Successfully prepared a docker volume kubenet-20220221084933-6550 I0221 09:07:26.487375 462115 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:07:26.487392 462115 kic.go:179] Starting extracting preloaded images to volume ... 
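Before creating the cluster network, the log shows three /24 subnets skipped as taken and 192.168.76.0/24 reserved. A toy reconstruction of that scan: the starting subnet and the step of 9 in the third octet (49, 58, 67, 76) are inferred from this run, and the real network.go builds its taken set by inspecting existing docker bridges rather than from a literal map:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate /24 networks the way this run does,
    // returning the first CIDR not already claimed by a docker bridge.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // kindnet network in this run
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24
    }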
I0221 09:07:26.487455 462115 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:07:27.456002 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:29.956149 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:27.935384 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:29.935450 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:34.973821 462115 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (8.486319336s) I0221 09:07:34.973862 462115 kic.go:188] duration metric: took 8.486468 seconds to extract preloaded images to volume W0221 09:07:34.973896 462115 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:07:34.973905 462115 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
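The 8.5-second step that just completed unpacks the preloaded image tarball into the cluster's named volume by running tar inside a throwaway kicbase container, exactly the docker run invocation shown above. A sketch of issuing that command from Go; the tarball path, volume name, and untagged image reference are shortened placeholders for the long values in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mount the preload tarball read-only, mount the cluster volume at
        // /extractDir, and unpack with lz4 decompression, mirroring the
        // logged `docker run --rm --entrypoint /usr/bin/tar ...` call.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
            "-v", "kubenet-20220221084933-6550:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
        }
    }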
I0221 09:07:34.973954 462115 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:07:35.070268 462115 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220221084933-6550 --name kubenet-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --network kubenet-20220221084933-6550 --ip 192.168.76.2 --volume kubenet-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:07:32.455703 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:34.456216 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:32.435168 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:34.936085 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:35.496307 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Running}} I0221 09:07:35.534760 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:07:35.570397 462115 cli_runner.go:133] Run: docker exec kubenet-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:07:35.639096 462115 oci.go:281] the created container "kubenet-20220221084933-6550" has a running status. I0221 09:07:35.639132 462115 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa... I0221 09:07:35.832919 462115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:07:35.920473 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:07:35.961161 462115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:07:35.961187 462115 kic_runner.go:114] Args: [docker exec --privileged kubenet-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:07:36.057451 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:07:36.093451 462115 machine.go:88] provisioning docker machine ... 
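The node container just created publishes each guest port on an ephemeral loopback port (--publish=127.0.0.1::22 and friends), so before SSH provisioning can begin, the machine layer asks docker which host port was actually bound. A sketch using the same inspect template that appears verbatim in the provisioning lines below:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor returns the ephemeral 127.0.0.1 port docker bound for a
    // given container port, via the Go template the provisioner runs below.
    func hostPortFor(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortFor("kubenet-20220221084933-6550", "22/tcp")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh docker@127.0.0.1 -p", port) // 49399 in this run
    }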
I0221 09:07:36.093496 462115 ubuntu.go:169] provisioning hostname "kubenet-20220221084933-6550" I0221 09:07:36.093551 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.131078 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.131315 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.131393 462115 main.go:130] libmachine: About to run SSH command: sudo hostname kubenet-20220221084933-6550 && echo "kubenet-20220221084933-6550" | sudo tee /etc/hostname I0221 09:07:36.264249 462115 main.go:130] libmachine: SSH cmd err, output: : kubenet-20220221084933-6550 I0221 09:07:36.264345 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.298302 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.298505 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.298538 462115 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\skubenet-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 kubenet-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:07:36.418944 462115 main.go:130] libmachine: SSH cmd err, output: : I0221 09:07:36.418972 462115 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:07:36.419031 462115 ubuntu.go:177] setting up certificates I0221 09:07:36.419042 462115 provision.go:83] configureAuth start I0221 09:07:36.419102 462115 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220221084933-6550 I0221 09:07:36.453836 462115 provision.go:138] copyHostCerts I0221 09:07:36.453901 462115 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, 
removing ... I0221 09:07:36.453915 462115 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:07:36.454002 462115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:07:36.454118 462115 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:07:36.454134 462115 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:07:36.454166 462115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:07:36.454258 462115 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:07:36.454273 462115 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:07:36.454297 462115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:07:36.454356 462115 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.kubenet-20220221084933-6550 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20220221084933-6550] I0221 09:07:36.554325 462115 provision.go:172] copyRemoteCerts I0221 09:07:36.554377 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:07:36.554408 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.590327 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:36.678785 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:07:36.697396 462115 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes) I0221 09:07:36.716087 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:07:36.735165 462115 provision.go:86] duration metric: configureAuth took 316.110066ms I0221 09:07:36.735197 462115 ubuntu.go:193] setting minikube options for container-runtime I0221 09:07:36.735391 462115 config.go:176] Loaded profile config "kubenet-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:36.735436 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.771473 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.771605 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.771620 462115 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:07:36.895259 462115 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:07:36.895289 462115 ubuntu.go:71] root file system type: overlay I0221 09:07:36.895428 462115 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:07:36.895486 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.929080 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.929241 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.929337 462115 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:07:37.060341 462115 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:07:37.060410 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:37.094195 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:37.094386 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:37.094408 462115 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:07:37.752179 462115 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:07:37.057067068 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:07:37.752225 462115 machine.go:91] provisioned docker machine in 1.658745148s I0221 09:07:37.752235 462115 client.go:171] LocalClient.Create took 12.079160402s I0221 09:07:37.752251 462115 start.go:168] duration metric: libmachine.API.Create for "kubenet-20220221084933-6550" took 12.079208916s I0221 09:07:37.752259 462115 start.go:267] post-start starting for "kubenet-20220221084933-6550" (driver="docker") I0221 09:07:37.752273 462115 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:07:37.752330 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:07:37.752382 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:37.787058 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:37.878859 462115 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:07:37.881740 462115 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:07:37.881781 462115 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:07:37.881789 462115 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:07:37.881794 462115 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:07:37.881802 462115 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
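The docker.service update that just ran is deliberately idempotent: the candidate unit is written to docker.service.new, and because diff -u exits non-zero only when the files differ, the braced group that installs the unit and restarts docker executes only on a real change. A sketch replaying the same one-liner over SSH, assuming the generated id_rsa key is already loaded and using the port and user from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // diff succeeds (exit 0) when nothing changed, so the || group is
        // skipped and docker is left running undisturbed.
        script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
}`
        out, err := exec.Command("ssh", "-p", "49399", "docker@127.0.0.1", script).CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }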
I0221 09:07:37.881849 462115 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:07:37.881912 462115 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:07:37.881993 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:07:37.889062 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:07:37.908245 462115 start.go:270] post-start completed in 155.964278ms I0221 09:07:37.908729 462115 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220221084933-6550 I0221 09:07:37.943174 462115 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/config.json ... I0221 09:07:37.943457 462115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:07:37.943523 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:37.978137 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:38.063776 462115 start.go:129] duration metric: createHost completed in 12.393300397s I0221 09:07:38.063805 462115 start.go:80] releasing machines lock for "kubenet-20220221084933-6550", held for 12.393436394s I0221 09:07:38.063890 462115 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220221084933-6550 I0221 09:07:38.097050 462115 ssh_runner.go:195] Run: systemctl --version I0221 09:07:38.097080 462115 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:07:38.097111 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:38.097154 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:38.135816 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:38.136347 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:38.363372 462115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:07:38.373406 462115 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:07:38.382853 462115 
cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:07:38.382907 462115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:07:38.392391 462115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:07:38.405004 462115 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:07:38.482371 462115 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:07:38.560931 462115 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:07:38.571438 462115 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:07:38.652075 462115 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:07:38.661857 462115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:07:38.700829 462115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:07:38.745158 462115 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:07:38.745227 462115 cli_runner.go:133] Run: docker network inspect kubenet-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:07:38.778053 462115 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts I0221 09:07:38.781345 462115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:07:38.792680 462115 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:07:38.792752 462115 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:07:38.792829 462115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:07:38.825837 462115 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:07:38.825858 462115 docker.go:537] Images already preloaded, skipping extraction I0221 09:07:38.825905 462115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:07:38.858580 462115 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:07:38.858604 462115 cache_images.go:84] Images are preloaded, skipping loading I0221 09:07:38.858644 462115 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:07:38.945322 462115 cni.go:89] network plugin configured as "kubenet", returning disabled I0221 09:07:38.945346 
I0221 09:07:38.945346 462115 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0221 09:07:38.945363 462115 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20220221084933-6550 NodeName:kubenet-20220221084933-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 09:07:38.945517 462115 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "kubenet-20220221084933-6550"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 09:07:38.945595 462115 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker
--hostname-override=kubenet-20220221084933-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:07:38.945645 462115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 09:07:38.953183 462115 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:07:38.953251 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:07:38.960848 462115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes) I0221 09:07:38.974225 462115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:07:38.987445 462115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes) I0221 09:07:39.000608 462115 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts I0221 09:07:39.003705 462115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:07:39.013379 462115 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550 for IP: 192.168.76.2 I0221 09:07:39.013475 462115 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:07:39.013519 462115 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:07:39.013564 462115 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.key I0221 09:07:39.013579 462115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt with IP's: [] I0221 09:07:39.346317 462115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt ... 
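The crypto.go records above show minikube minting a per-profile client certificate and key, signed by the cached minikubeCA key (the "skipping minikubeCA CA generation" records), so that kubectl can authenticate to the new cluster. As a rough illustration only (this is not minikube's actual code; the function name, subject, and validity period are invented), CA-signed certificate generation with Go's standard library looks roughly like this:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// signClientCert sketches what a step like "generating minikube-user signed
// cert" involves: build a key pair, describe the certificate in a template,
// and sign the template with the CA's key.
func signClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // hypothetical 3-year validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	out, err := os.Create("client.crt")
	if err != nil {
		return err
	}
	defer out.Close()
	return pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

The apiserver certificate generated next differs mainly in its IP SANs (192.168.76.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), which is why the log lists those addresses explicitly.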
I0221 09:07:39.346346 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: {Name:mkb4325f5289a5f6ad4c171aa035b58192e1b4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.346548 462115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.key ... I0221 09:07:39.346562 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.key: {Name:mkac2cbd9f4db250a8ffc020a7da89dce1a50dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.346647 462115 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25 I0221 09:07:39.346664 462115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:07:39.436480 462115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25 ... I0221 09:07:39.436514 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25: {Name:mkdad2ce4bce31598ddfebae3d7e9100b4287fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.436706 462115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25 ... 
I0221 09:07:39.436721 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25: {Name:mkbccf5fe005f13ded3079d17293c9590de20164 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.436793 462115 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt I0221 09:07:39.436851 462115 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key I0221 09:07:39.436893 462115 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key I0221 09:07:39.436902 462115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt with IP's: [] I0221 09:07:39.558306 462115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt ... I0221 09:07:39.558337 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt: {Name:mk451b4345ee41aef79f4374e4a11d13e02c5188 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.558521 462115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key ... 
I0221 09:07:39.558536 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key: {Name:mkc5644b090fa5bea7b910426f0cabeec97f042e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.558703 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:07:39.558744 462115 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:07:39.558757 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:07:39.558779 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:07:39.558802 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:07:39.558824 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:07:39.558865 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:07:39.559723 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:07:39.580056 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:07:39.598243 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 
bytes) I0221 09:07:39.616413 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0221 09:07:39.634932 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:07:39.652842 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:07:39.671073 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:07:39.688626 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:07:39.706116 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:07:39.724730 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:07:39.743321 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:07:39.761318 462115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:07:39.774674 462115 ssh_runner.go:195] Run: openssl version I0221 09:07:39.779702 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:07:39.787119 462115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:07:39.790153 462115 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:07:39.790202 462115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:07:39.794986 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:07:39.802359 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:07:39.810020 462115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:07:39.813156 462115 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:07:39.813199 462115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:07:39.818127 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs 
/etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:07:39.825636 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:07:39.833153 462115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:07:39.836239 462115 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:07:39.836288 462115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:07:39.841274 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:07:39.848658 462115 kubeadm.go:391] StartCluster: {Name:kubenet-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:07:39.848790 462115 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:07:39.880196 462115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:07:39.887505 462115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:07:39.894616 462115 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:07:39.894671 462115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf 
/etc/kubernetes/scheduler.conf I0221 09:07:39.901631 462115 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:07:39.901669 462115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:07:36.955571 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:38.956296 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:37.435269 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:39.934868 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:41.935305 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:40.422323 462115 out.go:203] - Generating certificates and keys ... I0221 09:07:43.156638 462115 out.go:203] - Booting up control plane ... I0221 09:07:41.455426 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:43.456495 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:45.955475 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:44.435651 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:46.934971 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:47.955535 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:49.956466 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:48.935532 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:50.935805 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:50.701742 462115 out.go:203] - Configuring RBAC rules ... 
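The Start record above launches `kubeadm init` on the node against the rendered /var/tmp/minikube/kubeadm.yaml; the out.go records that follow ("Generating certificates and keys", "Booting up control plane", "Configuring RBAC rules") are kubeadm's own phase output being relayed. A minimal sketch of driving the same invocation from Go (a hypothetical helper; the preflight-error list is abbreviated relative to the log):

package kubeadm

import "os/exec"

// InitCluster runs `kubeadm init` with a pre-rendered config file and returns
// kubeadm's combined output, mirroring the ssh_runner Start record above.
func InitCluster(configPath string) ([]byte, error) {
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", configPath,
		"--ignore-preflight-errors=Swap,Mem,SystemVerification") // abbreviated
	return cmd.CombinedOutput()
}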
I0221 09:07:51.116058 462115 cni.go:89] network plugin configured as "kubenet", returning disabled I0221 09:07:51.116111 462115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:07:51.116187 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.116187 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=kubenet-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T09_07_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.609926 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.609927 462115 ops.go:34] apiserver oom_adj: -16 I0221 09:07:52.169159 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:52.668865 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:53.169400 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:53.668642 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:54.168995 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:54.668863 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:55.169566 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.956502 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:54.455635 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:53.435434 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:55.934804 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:55.669358 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:56.168935 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:56.668724 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:57.168562 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:57.669164 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:58.169111 462115 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:58.669058 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:59.169144 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:59.669192 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:00.168848 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:56.456157 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:58.955830 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:00.956100 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:57.935055 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:59.935752 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:00.669438 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:01.168952 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:01.669422 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:02.168600 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:02.668524 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:03.169224 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:03.669191 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:03.726860 462115 kubeadm.go:1020] duration metric: took 12.610725464s to wait for elevateKubeSystemPrivileges. 
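The run of identical `kubectl get sa default` records above (roughly one every 500ms, 12.6s in total) reads as a plain retry loop: the API server answers, but the "default" ServiceAccount only exists once the controller manager's service-account controller has caught up. A sketch of such a loop (hypothetical helper, not minikube's actual elevateKubeSystemPrivileges code):

package bootstrap

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, matching the cadence of the repeated records above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %v", timeout)
}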
I0221 09:08:03.726892 462115 kubeadm.go:393] StartCluster complete in 23.878240603s I0221 09:08:03.726910 462115 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:08:03.727040 462115 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:08:03.729324 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:08:04.249961 462115 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20220221084933-6550" rescaled to 1 I0221 09:08:04.250026 462115 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:08:04.251879 462115 out.go:176] * Verifying Kubernetes components... I0221 09:08:04.250133 462115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:08:04.250165 462115 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:08:04.250295 462115 config.go:176] Loaded profile config "kubenet-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:08:04.252003 462115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:08:04.252112 462115 addons.go:65] Setting storage-provisioner=true in profile "kubenet-20220221084933-6550" I0221 09:08:04.252218 462115 addons.go:153] Setting addon storage-provisioner=true in "kubenet-20220221084933-6550" W0221 09:08:04.252235 462115 addons.go:165] addon storage-provisioner should already be in state true I0221 09:08:04.252123 462115 addons.go:65] Setting default-storageclass=true in profile "kubenet-20220221084933-6550" I0221 09:08:04.252284 462115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20220221084933-6550" I0221 09:08:04.253268 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:08:04.255154 462115 host.go:66] Checking if "kubenet-20220221084933-6550" exists ... I0221 09:08:04.257267 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:08:04.299560 462115 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:08:04.299670 462115 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:08:04.299690 462115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:08:04.299737 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:08:04.305379 462115 addons.go:153] Setting addon default-storageclass=true in "kubenet-20220221084933-6550" W0221 09:08:04.305411 462115 addons.go:165] addon default-storageclass should already be in state true I0221 09:08:04.305440 462115 host.go:66] Checking if "kubenet-20220221084933-6550" exists ... 
I0221 09:08:04.305926 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:08:04.346696 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:08:04.356870 462115 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:08:04.356904 462115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:08:04.356958 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:08:04.390212 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:08:04.434817 462115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:08:04.437465 462115 node_ready.go:35] waiting up to 5m0s for node "kubenet-20220221084933-6550" to be "Ready" ... I0221 09:08:04.443494 462115 node_ready.go:49] node "kubenet-20220221084933-6550" has status "Ready":"True" I0221 09:08:04.443524 462115 node_ready.go:38] duration metric: took 6.03247ms waiting for node "kubenet-20220221084933-6550" to be "Ready" ... I0221 09:08:04.443536 462115 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:08:04.512531 462115 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-cx6k8" in "kube-system" namespace to be "Ready" ... I0221 09:08:04.531502 462115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:08:04.624839 462115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:08:02.957327 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:05.456300 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:05.925268 462115 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.490406551s) I0221 09:08:05.925309 462115 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS I0221 09:08:06.036250 462115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.41137268s) I0221 09:08:06.036367 462115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.504836543s) I0221 09:08:02.434827 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:04.436572 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:06.935506 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:06.038177 462115 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:08:06.038202 462115 addons.go:417] enableAddons completed in 1.78804558s I0221 09:08:06.534275 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:09.032356 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:07.955229 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:09.956217 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:09.434727 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:11.435464 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:11.531561 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:13.532264 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:12.455629 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:14.455947 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:13.437319 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:15.934478 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:15.939175 442801 pod_ready.go:81] duration metric: took 4m0.019101325s waiting for pod "coredns-64897985d-mr75l" in "kube-system" namespace to be "Ready" ... E0221 09:08:15.939200 442801 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 09:08:15.939209 442801 pod_ready.go:78] waiting up to 5m0s for pod "etcd-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... 
I0221 09:08:15.943146 442801 pod_ready.go:92] pod "etcd-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:15.943167 442801 pod_ready.go:81] duration metric: took 3.9518ms waiting for pod "etcd-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.943176 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.946802 442801 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:15.946818 442801 pod_ready.go:81] duration metric: took 3.636488ms waiting for pod "kube-apiserver-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.946827 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.950426 442801 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:15.950443 442801 pod_ready.go:81] duration metric: took 3.610411ms waiting for pod "kube-controller-manager-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.950451 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-z67wt" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.332833 442801 pod_ready.go:92] pod "kube-proxy-z67wt" in "kube-system" namespace has status "Ready":"True" I0221 09:08:16.332859 442801 pod_ready.go:81] duration metric: took 382.401522ms waiting for pod "kube-proxy-z67wt" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.332869 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.733188 442801 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:16.733214 442801 pod_ready.go:81] duration metric: took 400.337647ms waiting for pod "kube-scheduler-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.733225 442801 pod_ready.go:38] duration metric: took 4m1.872423421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:08:16.733251 442801 api_server.go:51] waiting for apiserver process to appear ... 
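The interleaved pod_ready.go records come from three test profiles running in parallel (pids 462115, 450843, 442801), each driving the same kind of loop: fetch the pod, inspect its PodReady condition, and retry until it flips to True or the timeout fires. The "timed out waiting for the condition" error at the 4m mark is the standard Kubernetes wait-package timeout. A sketch with client-go (assumed dependency; the helper name is invented):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls a pod until its PodReady condition is True. On timeout,
// wait.PollImmediate returns the "timed out waiting for the condition" error
// that also shows up in the log above.
func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not found or transient error: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}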
I0221 09:08:16.733309 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:16.768380 442801 logs.go:274] 1 containers: [22f36e8efd01] I0221 09:08:16.768445 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:16.800855 442801 logs.go:274] 1 containers: [2d52356b4d44] I0221 09:08:16.800921 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:16.837626 442801 logs.go:274] 1 containers: [3eab59e55df1] I0221 09:08:16.837689 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:16.873309 442801 logs.go:274] 1 containers: [6e0b11913ead] I0221 09:08:16.873374 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:16.906474 442801 logs.go:274] 1 containers: [b198c3fa1558] I0221 09:08:16.906554 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:16.939879 442801 logs.go:274] 0 containers: [] W0221 09:08:16.939899 442801 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:16.939937 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:16.973485 442801 logs.go:274] 1 containers: [987fc4d25f59] I0221 09:08:16.973566 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:17.006616 442801 logs.go:274] 1 containers: [9da67fbcae63] I0221 09:08:17.006648 442801 logs.go:123] Gathering logs for kube-controller-manager [9da67fbcae63] ... I0221 09:08:17.006657 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da67fbcae63" I0221 09:08:17.053540 442801 logs.go:123] Gathering logs for Docker ... I0221 09:08:17.053572 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:15.532346 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:17.532788 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:20.032086 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:16.456884 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:18.955296 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:20.957135 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:17.071249 442801 logs.go:123] Gathering logs for kubelet ... I0221 09:08:17.071285 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:17.132024 442801 logs.go:123] Gathering logs for describe nodes ... I0221 09:08:17.132066 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:17.209722 442801 logs.go:123] Gathering logs for kube-apiserver [22f36e8efd01] ... I0221 09:08:17.209756 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f36e8efd01" I0221 09:08:17.251249 442801 logs.go:123] Gathering logs for kube-scheduler [6e0b11913ead] ... 
I0221 09:08:17.251280 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0b11913ead" I0221 09:08:17.292981 442801 logs.go:123] Gathering logs for storage-provisioner [987fc4d25f59] ... I0221 09:08:17.293018 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987fc4d25f59" I0221 09:08:17.329067 442801 logs.go:123] Gathering logs for dmesg ... I0221 09:08:17.329100 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:17.358556 442801 logs.go:123] Gathering logs for etcd [2d52356b4d44] ... I0221 09:08:17.358591 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d52356b4d44" I0221 09:08:17.426854 442801 logs.go:123] Gathering logs for coredns [3eab59e55df1] ... I0221 09:08:17.426899 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3eab59e55df1" I0221 09:08:17.464666 442801 logs.go:123] Gathering logs for kube-proxy [b198c3fa1558] ... I0221 09:08:17.464693 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b198c3fa1558" I0221 09:08:17.501900 442801 logs.go:123] Gathering logs for container status ... I0221 09:08:17.501927 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:20.035109 442801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:08:20.056012 442801 api_server.go:71] duration metric: took 4m5.308939265s to wait for apiserver process to appear ... I0221 09:08:20.056038 442801 api_server.go:87] waiting for apiserver healthz status ... I0221 09:08:20.056088 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:20.088468 442801 logs.go:274] 1 containers: [22f36e8efd01] I0221 09:08:20.088542 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:20.122210 442801 logs.go:274] 1 containers: [2d52356b4d44] I0221 09:08:20.122296 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:20.158381 442801 logs.go:274] 1 containers: [3eab59e55df1] I0221 09:08:20.158463 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:20.196267 442801 logs.go:274] 1 containers: [6e0b11913ead] I0221 09:08:20.196344 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:20.233791 442801 logs.go:274] 1 containers: [b198c3fa1558] I0221 09:08:20.233865 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:20.284366 442801 logs.go:274] 0 containers: [] W0221 09:08:20.284395 442801 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:20.284446 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:20.317999 442801 logs.go:274] 1 containers: [987fc4d25f59] I0221 09:08:20.318069 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:20.350848 442801 logs.go:274] 1 containers: [9da67fbcae63] I0221 09:08:20.350881 442801 logs.go:123] Gathering logs for kube-controller-manager [9da67fbcae63] ... I0221 09:08:20.350897 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da67fbcae63" I0221 09:08:20.397231 442801 logs.go:123] Gathering logs for describe nodes ... 
I0221 09:08:20.397265 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:20.478264 442801 logs.go:123] Gathering logs for kube-apiserver [22f36e8efd01] ... I0221 09:08:20.478295 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f36e8efd01" I0221 09:08:20.519692 442801 logs.go:123] Gathering logs for kube-scheduler [6e0b11913ead] ... I0221 09:08:20.519731 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0b11913ead" I0221 09:08:20.562951 442801 logs.go:123] Gathering logs for kube-proxy [b198c3fa1558] ... I0221 09:08:20.562980 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b198c3fa1558" I0221 09:08:20.598320 442801 logs.go:123] Gathering logs for storage-provisioner [987fc4d25f59] ... I0221 09:08:20.598355 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987fc4d25f59" I0221 09:08:20.634456 442801 logs.go:123] Gathering logs for container status ... I0221 09:08:20.634484 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:20.665048 442801 logs.go:123] Gathering logs for kubelet ... I0221 09:08:20.665075 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:20.725872 442801 logs.go:123] Gathering logs for dmesg ... I0221 09:08:20.725912 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:20.756158 442801 logs.go:123] Gathering logs for etcd [2d52356b4d44] ... I0221 09:08:20.756192 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d52356b4d44" I0221 09:08:20.826595 442801 logs.go:123] Gathering logs for coredns [3eab59e55df1] ... I0221 09:08:20.826630 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3eab59e55df1" I0221 09:08:20.863311 442801 logs.go:123] Gathering logs for Docker ... I0221 09:08:20.863341 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:22.032139 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:24.033164 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:23.456009 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:25.456137 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:23.380697 442801 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ... I0221 09:08:23.386477 442801 api_server.go:266] https://192.168.58.2:8443/healthz returned 200: ok I0221 09:08:23.387402 442801 api_server.go:140] control plane version: v1.23.4 I0221 09:08:23.387422 442801 api_server.go:130] duration metric: took 3.331378972s to wait for apiserver health ... I0221 09:08:23.387430 442801 system_pods.go:43] waiting for kube-system pods to appear ... 
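The api_server.go records in this stretch first confirm a kube-apiserver process exists (the pgrep above), then probe https://192.168.58.2:8443/healthz until it returns 200. A bare-bones version of such a probe (illustrative only; the real check presumably validates the server certificate against minikubeCA rather than skipping verification):

package health

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// CheckHealthz performs one HTTPS GET against an apiserver healthz endpoint.
// TLS verification is skipped purely to keep the sketch short.
func CheckHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}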
I0221 09:08:23.387474 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:23.421047 442801 logs.go:274] 1 containers: [22f36e8efd01] I0221 09:08:23.421115 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:23.454732 442801 logs.go:274] 1 containers: [2d52356b4d44] I0221 09:08:23.454820 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:23.487796 442801 logs.go:274] 1 containers: [3eab59e55df1] I0221 09:08:23.487856 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:23.521159 442801 logs.go:274] 1 containers: [6e0b11913ead] I0221 09:08:23.521229 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:23.554304 442801 logs.go:274] 1 containers: [b198c3fa1558] I0221 09:08:23.554365 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:23.586487 442801 logs.go:274] 0 containers: [] W0221 09:08:23.586516 442801 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:23.586570 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:23.619535 442801 logs.go:274] 1 containers: [987fc4d25f59] I0221 09:08:23.619609 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:23.653221 442801 logs.go:274] 1 containers: [9da67fbcae63] I0221 09:08:23.653257 442801 logs.go:123] Gathering logs for Docker ... I0221 09:08:23.653267 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:23.672016 442801 logs.go:123] Gathering logs for dmesg ... I0221 09:08:23.672053 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:23.701424 442801 logs.go:123] Gathering logs for kube-apiserver [22f36e8efd01] ... I0221 09:08:23.701468 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f36e8efd01" I0221 09:08:23.743991 442801 logs.go:123] Gathering logs for coredns [3eab59e55df1] ... I0221 09:08:23.744028 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3eab59e55df1" I0221 09:08:23.780569 442801 logs.go:123] Gathering logs for kube-controller-manager [9da67fbcae63] ... I0221 09:08:23.780619 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da67fbcae63" I0221 09:08:23.827784 442801 logs.go:123] Gathering logs for kube-proxy [b198c3fa1558] ... I0221 09:08:23.827817 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b198c3fa1558" I0221 09:08:23.864039 442801 logs.go:123] Gathering logs for storage-provisioner [987fc4d25f59] ... I0221 09:08:23.864066 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987fc4d25f59" I0221 09:08:23.898581 442801 logs.go:123] Gathering logs for container status ... I0221 09:08:23.898611 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:23.929719 442801 logs.go:123] Gathering logs for kubelet ... I0221 09:08:23.929752 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:23.993562 442801 logs.go:123] Gathering logs for describe nodes ... 
I0221 09:08:23.993601 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:08:24.072192 442801 logs.go:123] Gathering logs for etcd [2d52356b4d44] ...
I0221 09:08:24.072221 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d52356b4d44"
I0221 09:08:24.140746 442801 logs.go:123] Gathering logs for kube-scheduler [6e0b11913ead] ...
I0221 09:08:24.140783 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0b11913ead"
I0221 09:08:26.688110 442801 system_pods.go:59] 7 kube-system pods found
I0221 09:08:26.688145 442801 system_pods.go:61] "coredns-64897985d-mr75l" [0cfd24b7-95f1-482c-bcb1-3beb08eebcac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:08:26.688151 442801 system_pods.go:61] "etcd-enable-default-cni-20220221084933-6550" [dfe7d3c6-2aee-415d-a22b-2f35c061c3c6] Running
I0221 09:08:26.688156 442801 system_pods.go:61] "kube-apiserver-enable-default-cni-20220221084933-6550" [d2a36bb5-d5a4-48b0-b8ea-12bbe483aa51] Running
I0221 09:08:26.688160 442801 system_pods.go:61] "kube-controller-manager-enable-default-cni-20220221084933-6550" [f17938b7-182f-4b24-b475-c222cdd5babc] Running
I0221 09:08:26.688165 442801 system_pods.go:61] "kube-proxy-z67wt" [5988151c-b7ae-4c8d-9095-09aeb868ab3c] Running
I0221 09:08:26.688173 442801 system_pods.go:61] "kube-scheduler-enable-default-cni-20220221084933-6550" [0d16e06a-b0a2-4266-bca7-1f5d7e5fc9a7] Running
I0221 09:08:26.688180 442801 system_pods.go:61] "storage-provisioner" [8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:08:26.688185 442801 system_pods.go:74] duration metric: took 3.300751331s to wait for pod list to return data ...
I0221 09:08:26.688204 442801 default_sa.go:34] waiting for default service account to be created ...
I0221 09:08:26.690563 442801 default_sa.go:45] found service account: "default"
I0221 09:08:26.690583 442801 default_sa.go:55] duration metric: took 2.374957ms for default service account to be created ...
I0221 09:08:26.690589 442801 system_pods.go:116] waiting for k8s-apps to be running ...
I0221 09:08:26.694728 442801 system_pods.go:86] 7 kube-system pods found
I0221 09:08:26.694761 442801 system_pods.go:89] "coredns-64897985d-mr75l" [0cfd24b7-95f1-482c-bcb1-3beb08eebcac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:08:26.694771 442801 system_pods.go:89] "etcd-enable-default-cni-20220221084933-6550" [dfe7d3c6-2aee-415d-a22b-2f35c061c3c6] Running
I0221 09:08:26.694781 442801 system_pods.go:89] "kube-apiserver-enable-default-cni-20220221084933-6550" [d2a36bb5-d5a4-48b0-b8ea-12bbe483aa51] Running
I0221 09:08:26.694788 442801 system_pods.go:89] "kube-controller-manager-enable-default-cni-20220221084933-6550" [f17938b7-182f-4b24-b475-c222cdd5babc] Running
I0221 09:08:26.694798 442801 system_pods.go:89] "kube-proxy-z67wt" [5988151c-b7ae-4c8d-9095-09aeb868ab3c] Running
I0221 09:08:26.694806 442801 system_pods.go:89] "kube-scheduler-enable-default-cni-20220221084933-6550" [0d16e06a-b0a2-4266-bca7-1f5d7e5fc9a7] Running
I0221 09:08:26.694821 442801 system_pods.go:89] "storage-provisioner" [8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:08:26.694831 442801 system_pods.go:126] duration metric: took 4.238216ms to wait for k8s-apps to be running ...
I0221 09:08:26.694840 442801 system_svc.go:44] waiting for kubelet service to be running ....
I0221 09:08:26.694893 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 09:08:26.705033 442801 system_svc.go:56] duration metric: took 10.186494ms WaitForService to wait for kubelet.
I0221 09:08:26.705054 442801 kubeadm.go:548] duration metric: took 4m11.957986174s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0221 09:08:26.705090 442801 node_conditions.go:102] verifying NodePressure condition ...
I0221 09:08:26.708537 442801 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0221 09:08:26.708562 442801 node_conditions.go:123] node cpu capacity is 8
I0221 09:08:26.708574 442801 node_conditions.go:105] duration metric: took 3.473833ms to run NodePressure ...
I0221 09:08:26.708582 442801 start.go:213] waiting for startup goroutines ...
I0221 09:08:26.743675 442801 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0)
I0221 09:08:26.746370 442801 out.go:176] * Done! kubectl is now configured to use "enable-default-cni-20220221084933-6550" cluster and "default" namespace by default
I0221 09:08:26.532003 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:28.532271 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:27.955843 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:29.957347 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:30.461783 450843 pod_ready.go:81] duration metric: took 4m0.019873237s waiting for pod "coredns-64897985d-7jshp" in "kube-system" namespace to be "Ready" ...
E0221 09:08:30.461805 450843 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0221 09:08:30.461815 450843 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-tl8l4" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.464015 450843 pod_ready.go:97] error getting pod "coredns-64897985d-tl8l4" in "kube-system" namespace (skipping!): pods "coredns-64897985d-tl8l4" not found
I0221 09:08:30.464043 450843 pod_ready.go:81] duration metric: took 2.221437ms waiting for pod "coredns-64897985d-tl8l4" in "kube-system" namespace to be "Ready" ...
E0221 09:08:30.464052 450843 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-tl8l4" in "kube-system" namespace (skipping!): pods "coredns-64897985d-tl8l4" not found
I0221 09:08:30.464060 450843 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.468684 450843 pod_ready.go:92] pod "etcd-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:08:30.468703 450843 pod_ready.go:81] duration metric: took 4.62867ms waiting for pod "etcd-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.468712 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.473506 450843 pod_ready.go:92] pod "kube-apiserver-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:08:30.473534 450843 pod_ready.go:81] duration metric: took 4.815616ms waiting for pod "kube-apiserver-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.473547 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.655362 450843 pod_ready.go:92] pod "kube-controller-manager-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:08:30.655389 450843 pod_ready.go:81] duration metric: took 181.833546ms waiting for pod "kube-controller-manager-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.655404 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-pzvfl" in "kube-system" namespace to be "Ready" ...
I0221 09:08:30.533538 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:33.032221 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:35.032654 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:31.055302 450843 pod_ready.go:92] pod "kube-proxy-pzvfl" in "kube-system" namespace has status "Ready":"True"
I0221 09:08:31.055329 450843 pod_ready.go:81] duration metric: took 399.916434ms waiting for pod "kube-proxy-pzvfl" in "kube-system" namespace to be "Ready" ...
I0221 09:08:31.055341 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:31.454924 450843 pod_ready.go:92] pod "kube-scheduler-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:08:31.454951 450843 pod_ready.go:81] duration metric: took 399.602576ms waiting for pod "kube-scheduler-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:08:31.454961 450843 pod_ready.go:38] duration metric: took 4m1.022736723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:08:31.454988 450843 api_server.go:51] waiting for apiserver process to appear ...
I0221 09:08:31.455055 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:08:31.491695 450843 logs.go:274] 1 containers: [6a850a90d786]
I0221 09:08:31.491756 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:08:31.531102 450843 logs.go:274] 1 containers: [5eb857f7738e]
I0221 09:08:31.531209 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:08:31.571986 450843 logs.go:274] 1 containers: [8eb32092067f]
I0221 09:08:31.572064 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:08:31.603731 450843 logs.go:274] 1 containers: [6e69145b30ad]
I0221 09:08:31.603809 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:08:31.642830 450843 logs.go:274] 1 containers: [cd31aa9c0c74]
I0221 09:08:31.642911 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:08:31.680618 450843 logs.go:274] 0 containers: []
W0221 09:08:31.680640 450843 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:08:31.680695 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:08:31.716281 450843 logs.go:274] 2 containers: [dedfecc4ece7 40d03e6cd1a3]
I0221 09:08:31.716379 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:08:31.757092 450843 logs.go:274] 1 containers: [d092f7171bc6]
I0221 09:08:31.757132 450843 logs.go:123] Gathering logs for kubelet ...
I0221 09:08:31.757143 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:08:31.825700 450843 logs.go:123] Gathering logs for dmesg ...
I0221 09:08:31.825746 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:08:31.858480 450843 logs.go:123] Gathering logs for coredns [8eb32092067f] ...
I0221 09:08:31.858519 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eb32092067f"
I0221 09:08:31.896488 450843 logs.go:123] Gathering logs for storage-provisioner [40d03e6cd1a3] ...
I0221 09:08:31.896515 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40d03e6cd1a3"
I0221 09:08:31.936833 450843 logs.go:123] Gathering logs for kube-controller-manager [d092f7171bc6] ...
I0221 09:08:31.936864 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d092f7171bc6"
I0221 09:08:31.986267 450843 logs.go:123] Gathering logs for Docker ...
I0221 09:08:31.986300 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:08:32.003785 450843 logs.go:123] Gathering logs for container status ...
I0221 09:08:32.003828 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:08:32.034717 450843 logs.go:123] Gathering logs for describe nodes ...
I0221 09:08:32.034746 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:08:32.113035 450843 logs.go:123] Gathering logs for kube-apiserver [6a850a90d786] ...
I0221 09:08:32.113066 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a850a90d786"
I0221 09:08:32.154757 450843 logs.go:123] Gathering logs for etcd [5eb857f7738e] ...
I0221 09:08:32.154788 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb857f7738e"
I0221 09:08:32.195113 450843 logs.go:123] Gathering logs for kube-scheduler [6e69145b30ad] ...
I0221 09:08:32.195148 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e69145b30ad"
I0221 09:08:32.238193 450843 logs.go:123] Gathering logs for kube-proxy [cd31aa9c0c74] ...
I0221 09:08:32.238227 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd31aa9c0c74"
I0221 09:08:32.276303 450843 logs.go:123] Gathering logs for storage-provisioner [dedfecc4ece7] ...
I0221 09:08:32.276341 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dedfecc4ece7"
I0221 09:08:34.817047 450843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:08:34.840119 450843 api_server.go:71] duration metric: took 4m4.619098866s to wait for apiserver process to appear ...
I0221 09:08:34.840149 450843 api_server.go:87] waiting for apiserver healthz status ...
I0221 09:08:34.840199 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:08:34.875740 450843 logs.go:274] 1 containers: [6a850a90d786]
I0221 09:08:34.875812 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:08:34.910872 450843 logs.go:274] 1 containers: [5eb857f7738e]
I0221 09:08:34.910947 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:08:34.944892 450843 logs.go:274] 1 containers: [8eb32092067f]
I0221 09:08:34.944960 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:08:34.977163 450843 logs.go:274] 1 containers: [6e69145b30ad]
I0221 09:08:34.977221 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:08:35.010985 450843 logs.go:274] 1 containers: [cd31aa9c0c74]
I0221 09:08:35.011097 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:08:35.046325 450843 logs.go:274] 0 containers: []
W0221 09:08:35.046354 450843 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:08:35.046395 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:08:35.079716 450843 logs.go:274] 1 containers: [dedfecc4ece7]
I0221 09:08:35.079795 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:08:35.113823 450843 logs.go:274] 1 containers: [d092f7171bc6]
I0221 09:08:35.113862 450843 logs.go:123] Gathering logs for describe nodes ...
I0221 09:08:35.113877 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:08:35.189171 450843 logs.go:123] Gathering logs for coredns [8eb32092067f] ...
I0221 09:08:35.189199 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eb32092067f"
I0221 09:08:35.225009 450843 logs.go:123] Gathering logs for kube-scheduler [6e69145b30ad] ...
I0221 09:08:35.225039 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e69145b30ad"
I0221 09:08:35.271029 450843 logs.go:123] Gathering logs for kube-proxy [cd31aa9c0c74] ...
I0221 09:08:35.271066 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd31aa9c0c74"
I0221 09:08:35.307725 450843 logs.go:123] Gathering logs for kube-controller-manager [d092f7171bc6] ...
I0221 09:08:35.307772 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d092f7171bc6"
I0221 09:08:35.355496 450843 logs.go:123] Gathering logs for kubelet ...
I0221 09:08:35.355531 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:08:35.417146 450843 logs.go:123] Gathering logs for kube-apiserver [6a850a90d786] ...
I0221 09:08:35.417244 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a850a90d786"
I0221 09:08:35.459560 450843 logs.go:123] Gathering logs for etcd [5eb857f7738e] ...
I0221 09:08:35.459598 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb857f7738e"
I0221 09:08:35.498980 450843 logs.go:123] Gathering logs for storage-provisioner [dedfecc4ece7] ...
I0221 09:08:35.499046 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dedfecc4ece7"
I0221 09:08:35.536957 450843 logs.go:123] Gathering logs for Docker ...
I0221 09:08:35.536986 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:08:35.553551 450843 logs.go:123] Gathering logs for container status ...
I0221 09:08:35.553587 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:08:35.585466 450843 logs.go:123] Gathering logs for dmesg ...
I0221 09:08:35.585502 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:08:37.532234 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:40.032852 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:38.116914 450843 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:08:38.122687 450843 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok
I0221 09:08:38.123855 450843 api_server.go:140] control plane version: v1.23.4
I0221 09:08:38.123880 450843 api_server.go:130] duration metric: took 3.28372628s to wait for apiserver health ...
I0221 09:08:38.123889 450843 system_pods.go:43] waiting for kube-system pods to appear ...
I0221 09:08:38.123935 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:08:38.159427 450843 logs.go:274] 1 containers: [6a850a90d786]
I0221 09:08:38.159494 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:08:38.193788 450843 logs.go:274] 1 containers: [5eb857f7738e]
I0221 09:08:38.193865 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:08:38.229739 450843 logs.go:274] 1 containers: [8eb32092067f]
I0221 09:08:38.229817 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:08:38.265319 450843 logs.go:274] 1 containers: [6e69145b30ad]
I0221 09:08:38.265402 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:08:38.299845 450843 logs.go:274] 1 containers: [cd31aa9c0c74]
I0221 09:08:38.299913 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:08:38.335291 450843 logs.go:274] 0 containers: []
W0221 09:08:38.335317 450843 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:08:38.335371 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:08:38.371605 450843 logs.go:274] 1 containers: [dedfecc4ece7]
I0221 09:08:38.371697 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:08:38.410349 450843 logs.go:274] 1 containers: [d092f7171bc6]
I0221 09:08:38.410384 450843 logs.go:123] Gathering logs for etcd [5eb857f7738e] ...
I0221 09:08:38.410398 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb857f7738e"
I0221 09:08:38.455212 450843 logs.go:123] Gathering logs for kube-scheduler [6e69145b30ad] ...
I0221 09:08:38.455258 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e69145b30ad"
I0221 09:08:38.521301 450843 logs.go:123] Gathering logs for kube-proxy [cd31aa9c0c74] ...
I0221 09:08:38.521339 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd31aa9c0c74"
I0221 09:08:38.558468 450843 logs.go:123] Gathering logs for storage-provisioner [dedfecc4ece7] ...
I0221 09:08:38.558494 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dedfecc4ece7"
I0221 09:08:38.595044 450843 logs.go:123] Gathering logs for kube-controller-manager [d092f7171bc6] ...
I0221 09:08:38.595079 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d092f7171bc6"
I0221 09:08:38.643023 450843 logs.go:123] Gathering logs for container status ...
I0221 09:08:38.643061 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:08:38.673174 450843 logs.go:123] Gathering logs for dmesg ...
I0221 09:08:38.673205 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:08:38.705820 450843 logs.go:123] Gathering logs for describe nodes ...
I0221 09:08:38.705854 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:08:38.783647 450843 logs.go:123] Gathering logs for kube-apiserver [6a850a90d786] ...
I0221 09:08:38.783681 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a850a90d786"
I0221 09:08:38.824580 450843 logs.go:123] Gathering logs for coredns [8eb32092067f] ...
I0221 09:08:38.824618 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eb32092067f"
I0221 09:08:38.861663 450843 logs.go:123] Gathering logs for Docker ...
I0221 09:08:38.861694 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:08:38.878877 450843 logs.go:123] Gathering logs for kubelet ...
I0221 09:08:38.878909 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:08:41.444924 450843 system_pods.go:59] 7 kube-system pods found
I0221 09:08:41.444986 450843 system_pods.go:61] "coredns-64897985d-7jshp" [8d3d6c95-cecd-4c5c-b6a5-481f281a9c9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:08:41.445006 450843 system_pods.go:61] "etcd-bridge-20220221084933-6550" [6405e54f-2102-4390-9bac-d18668f32149] Running
I0221 09:08:41.445022 450843 system_pods.go:61] "kube-apiserver-bridge-20220221084933-6550" [4ce115ae-793f-4994-a0be-928e77985675] Running
I0221 09:08:41.445034 450843 system_pods.go:61] "kube-controller-manager-bridge-20220221084933-6550" [1e23af6e-a828-4974-ac87-c367b69697d6] Running
I0221 09:08:41.445044 450843 system_pods.go:61] "kube-proxy-pzvfl" [1d716cc7-064a-4439-88b1-5d131874760e] Running
I0221 09:08:41.445058 450843 system_pods.go:61] "kube-scheduler-bridge-20220221084933-6550" [63fb1f89-2553-4c6d-99a2-fb69ac76690f] Running
I0221 09:08:41.445073 450843 system_pods.go:61] "storage-provisioner" [2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:08:41.445084 450843 system_pods.go:74] duration metric: took 3.321189782s to wait for pod list to return data ...
I0221 09:08:41.445098 450843 default_sa.go:34] waiting for default service account to be created ...
I0221 09:08:41.447570 450843 default_sa.go:45] found service account: "default"
I0221 09:08:41.447591 450843 default_sa.go:55] duration metric: took 2.485246ms for default service account to be created ...
I0221 09:08:41.447598 450843 system_pods.go:116] waiting for k8s-apps to be running ...
I0221 09:08:41.451546 450843 system_pods.go:86] 7 kube-system pods found
I0221 09:08:41.451573 450843 system_pods.go:89] "coredns-64897985d-7jshp" [8d3d6c95-cecd-4c5c-b6a5-481f281a9c9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:08:41.451580 450843 system_pods.go:89] "etcd-bridge-20220221084933-6550" [6405e54f-2102-4390-9bac-d18668f32149] Running
I0221 09:08:41.451585 450843 system_pods.go:89] "kube-apiserver-bridge-20220221084933-6550" [4ce115ae-793f-4994-a0be-928e77985675] Running
I0221 09:08:41.451589 450843 system_pods.go:89] "kube-controller-manager-bridge-20220221084933-6550" [1e23af6e-a828-4974-ac87-c367b69697d6] Running
I0221 09:08:41.451593 450843 system_pods.go:89] "kube-proxy-pzvfl" [1d716cc7-064a-4439-88b1-5d131874760e] Running
I0221 09:08:41.451597 450843 system_pods.go:89] "kube-scheduler-bridge-20220221084933-6550" [63fb1f89-2553-4c6d-99a2-fb69ac76690f] Running
I0221 09:08:41.451602 450843 system_pods.go:89] "storage-provisioner" [2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:08:41.451612 450843 system_pods.go:126] duration metric: took 4.010324ms to wait for k8s-apps to be running ...
I0221 09:08:41.451626 450843 system_svc.go:44] waiting for kubelet service to be running ....
I0221 09:08:41.451661 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 09:08:41.461622 450843 system_svc.go:56] duration metric: took 9.989373ms WaitForService to wait for kubelet.
I0221 09:08:41.461652 450843 kubeadm.go:548] duration metric: took 4m11.240635372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0221 09:08:41.461680 450843 node_conditions.go:102] verifying NodePressure condition ...
I0221 09:08:41.464863 450843 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0221 09:08:41.464907 450843 node_conditions.go:123] node cpu capacity is 8
I0221 09:08:41.464917 450843 node_conditions.go:105] duration metric: took 3.227765ms to run NodePressure ...
I0221 09:08:41.464926 450843 start.go:213] waiting for startup goroutines ...
I0221 09:08:41.499178 450843 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0)
I0221 09:08:41.501675 450843 out.go:176] * Done! kubectl is now configured to use "bridge-20220221084933-6550" cluster and "default" namespace by default
I0221 09:08:42.033362 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:44.034484 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:46.532343 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:49.032155 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:51.033187 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:53.532405 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:56.032905 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:08:58.532721 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:00.532794 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:03.032150 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:05.032898 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:07.532448 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:10.032749 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:12.532590 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:15.033767 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:17.532289 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:20.032211 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:22.532201 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:25.032677 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:27.032803 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:29.531735 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:31.531982 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:34.033899 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:36.532071 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
I0221 09:09:39.032274 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False"
* 
* ==> Docker <==
* 
-- Logs begin at Mon 2022-02-21 09:02:54 UTC, end at Mon 2022-02-21 09:09:44 UTC. --
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[214]: time="2022-02-21T09:02:56.244386588Z" level=info msg="Daemon shutdown complete"
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: docker.service: Succeeded.
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: Stopped Docker Application Container Engine.
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: Starting Docker Application Container Engine...
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.334905197Z" level=info msg="Starting up"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336896710Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336924458Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336951738Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336962413Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338038722Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338061880Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338075622Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338086812Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.342479756Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348140456Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348164146Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348169603Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348328051Z" level=info msg="Loading containers: start."
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.430255625Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.466301525Z" level=info msg="Loading containers: done."
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.478724309Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.478782822Z" level=info msg="Daemon has completed initialization"
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: Started Docker Application Container Engine.
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.496951660Z" level=info msg="API listen on [::]:2376"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.500973648Z" level=info msg="API listen on /var/run/docker.sock"
* 
* ==> container status <==
* 
CONTAINER       IMAGE                                                                                                              CREATED         STATE     NAME                      ATTEMPT   POD ID
7a3bfe996b397   k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1         6 minutes ago   Running   dnsutils                  0         c3ef6cecbc94f
f2ab40995bb27   a4ca41631cc7a                                                                                                      6 minutes ago   Running   coredns                   0         4f60f93c15694
4a4b744690f25   6e38f40d628db                                                                                                      6 minutes ago   Running   storage-provisioner       0         2f181e31e7536
2ed4ff0a0f504   kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c                           6 minutes ago   Running   kindnet-cni               0         4f6e7ea40b1b4
d411d70ae4d28   2114245ec4d6b                                                                                                      6 minutes ago   Running   kube-proxy                0         8ff7ab628ae6c
30bfd023cee4b   62930710c9634                                                                                                      6 minutes ago   Running   kube-apiserver            0         419ab81f59e8d
d3125748aff71   aceacb6244f9f                                                                                                      6 minutes ago   Running   kube-scheduler            0         3a2ef27de0509
402525f4b6a6b   25444908517a5                                                                                                      6 minutes ago   Running   kube-controller-manager   0         78fc95e5b159d
026fb6380dcde   25f8c7f3da61c                                                                                                      6 minutes ago   Running   etcd                      0         1bf7e091ed075
* 
* ==> coredns [f2ab40995bb2] <==
* 
.:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
* 
* ==> describe nodes <==
* 
Name:               kindnet-20220221084934-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kindnet-20220221084934-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=kindnet-20220221084934-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T09_03_11_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 09:03:07 +0000
Taints:             
Unschedulable:      false
Lease:
  HolderIdentity:  kindnet-20220221084934-6550
  AcquireTime:     
  RenewTime:       Mon, 21 Feb 2022 09:09:39 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:04 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:04 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:04 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:31 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    kindnet-20220221084934-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                2787a248-8102-41be-94ef-882a836b4e46
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                                 ------------  ----------  ---------------  -------------  ---
  default      netcat-668db85669-lcmt9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
  kube-system  coredns-64897985d-svjnh                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m21s
  kube-system  etcd-kindnet-20220221084934-6550                     100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m33s
  kube-system  kindnet-b7vpv                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m21s
  kube-system  kube-apiserver-kindnet-20220221084934-6550           250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m33s
  kube-system  kube-controller-manager-kindnet-20220221084934-6550  200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m33s
  kube-system  kube-proxy-hvpn5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
  kube-system  kube-scheduler-kindnet-20220221084934-6550           100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m33s
  kube-system  storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  100m (1%)
  memory             220Mi (0%)  220Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From        Message
  ----    ------                   ----   ----        -------
  Normal  Starting                 6m21s  kube-proxy  
  Normal  Starting                 6m34s  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m34s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m34s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m34s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m33s  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                6m13s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeReady
* 
* ==> dmesg <==
* 
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +0.807956] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a
[ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00
[ +0.215904] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.019944] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +0.500012] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +1.003841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +1.023942] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +0.427998] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +0.807964] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a
[ +0.000009] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00
[ +0.203925] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.027893] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +3.491828] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +1.015843] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
* 
* ==> etcd [026fb6380dcd] <==
* 
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:kindnet-20220221084934-6550 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-02-21T09:03:04.713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:03:04.713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:03:04.714Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-02-21T09:03:04.714Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T09:03:04.716Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-02-21T09:03:04.716Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"warn","ts":"2022-02-21T09:03:34.975Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"225.435233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-21T09:03:34.975Z","caller":"traceutil/trace.go:171","msg":"trace[1189977891] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:497; }","duration":"225.575766ms","start":"2022-02-21T09:03:34.750Z","end":"2022-02-21T09:03:34.975Z","steps":["trace[1189977891] 'range keys from in-memory index tree' (duration: 225.337024ms)"],"step_count":1}
{"level":"warn","ts":"2022-02-21T09:03:34.975Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"319.726472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4667"}
{"level":"info","ts":"2022-02-21T09:03:34.975Z","caller":"traceutil/trace.go:171","msg":"trace[1649275344] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:497; }","duration":"319.918366ms","start":"2022-02-21T09:03:34.655Z","end":"2022-02-21T09:03:34.975Z","steps":["trace[1649275344] 'range keys from in-memory index tree' (duration: 319.584017ms)"],"step_count":1}
{"level":"warn","ts":"2022-02-21T09:03:34.975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:03:34.655Z","time spent":"319.994394ms","remote":"127.0.0.1:33264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":4691,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
{"level":"warn","ts":"2022-02-21T09:03:56.870Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.450404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-21T09:03:56.870Z","caller":"traceutil/trace.go:171","msg":"trace[1212159472] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:540; }","duration":"120.560675ms","start":"2022-02-21T09:03:56.750Z","end":"2022-02-21T09:03:56.870Z","steps":["trace[1212159472] 'agreement among raft nodes before linearized reading' (duration: 28.876562ms)","trace[1212159472] 'range keys from in-memory index tree' (duration: 91.568692ms)"],"step_count":2}
{"level":"info","ts":"2022-02-21T09:03:58.855Z","caller":"traceutil/trace.go:171","msg":"trace[883194937] linearizableReadLoop","detail":"{readStateIndex:562; appliedIndex:562; }","duration":"105.270775ms","start":"2022-02-21T09:03:58.750Z","end":"2022-02-21T09:03:58.855Z","steps":["trace[883194937] 'read index received' (duration: 105.259046ms)","trace[883194937] 'applied index is now lower than readState.Index' (duration: 10.307µs)"],"step_count":2}
{"level":"warn","ts":"2022-02-21T09:03:58.957Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"206.591649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-21T09:03:58.957Z","caller":"traceutil/trace.go:171","msg":"trace[1031669119] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:541; }","duration":"206.688911ms","start":"2022-02-21T09:03:58.750Z","end":"2022-02-21T09:03:58.957Z","steps":["trace[1031669119] 'agreement among raft nodes before linearized reading' (duration: 105.388231ms)","trace[1031669119] 'range keys from in-memory index tree' (duration: 101.17504ms)"],"step_count":2}
{"level":"info","ts":"2022-02-21T09:03:59.382Z","caller":"traceutil/trace.go:171","msg":"trace[1952457421] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"160.781985ms","start":"2022-02-21T09:03:59.221Z","end":"2022-02-21T09:03:59.382Z","steps":["trace[1952457421] 'process raft request' (duration: 63.877871ms)","trace[1952457421] 'compare' (duration: 96.783784ms)"],"step_count":2}
{"level":"info","ts":"2022-02-21T09:04:00.989Z","caller":"traceutil/trace.go:171","msg":"trace[426299990] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"239.672823ms","start":"2022-02-21T09:04:00.749Z","end":"2022-02-21T09:04:00.989Z","steps":["trace[426299990] 'read index received' (duration: 239.664183ms)","trace[426299990] 'applied index is now lower than readState.Index' (duration: 7.391µs)"],"step_count":2}
{"level":"warn","ts":"2022-02-21T09:04:00.991Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"242.264699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-21T09:04:00.991Z","caller":"traceutil/trace.go:171","msg":"trace[1618152808] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"242.338947ms","start":"2022-02-21T09:04:00.749Z","end":"2022-02-21T09:04:00.991Z","steps":["trace[1618152808] 'agreement among raft nodes before linearized reading' (duration: 239.821572ms)"],"step_count":1}
* 
* ==> kernel <==
* 
09:09:45 up 52 min, 0 users, load average: 1.17, 2.17, 2.85
Linux kindnet-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
* 
* ==> kube-apiserver [30bfd023cee4] <==
* 
I0221 09:03:07.726604 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0221 09:03:07.726640 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0221 09:03:07.733603 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0221 09:03:07.749557 1 shared_informer.go:247] Caches are synced for node_authorizer
I0221 09:03:08.625861 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0221 09:03:08.632794 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0221 09:03:08.634094 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0221 09:03:08.637287 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0221 09:03:08.637308 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0221 09:03:09.089112 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0221 09:03:09.137979 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0221 09:03:09.225516 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0221 09:03:09.231067 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0221 09:03:09.232625 1 controller.go:611] quota admission added evaluator for: endpoints
I0221 09:03:09.237210 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0221 09:03:09.767749 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0221 09:03:10.576047 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0221 09:03:10.584875 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0221 09:03:10.595885 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0221 09:03:10.815301 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0221 09:03:23.072677 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0221 09:03:23.523367 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0221 09:03:23.974139 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0221 09:03:40.627749 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.108.109.54]
E0221 09:07:44.715564 1 upgradeaware.go:409] Error proxying data from client to backend: write tcp 192.168.49.2:46294->192.168.49.2:10250: write: broken pipe
* 
* ==> kube-controller-manager [402525f4b6a6] <==
* 
I0221 09:03:22.619439 1 shared_informer.go:247] Caches are synced for attach detach
I0221 09:03:22.619472 1 shared_informer.go:247] Caches are synced for TTL
I0221 09:03:22.620573 1 shared_informer.go:247] Caches are synced for persistent volume
I0221 09:03:22.621752 1 shared_informer.go:247] Caches are synced for TTL after finished
I0221 09:03:22.774867 1 shared_informer.go:247] Caches are synced for deployment
I0221 09:03:22.796439 1 shared_informer.go:247] Caches are synced for resource quota
I0221 09:03:22.807569 1 shared_informer.go:247] Caches are synced for disruption
I0221 09:03:22.807598 1 disruption.go:371] Sending events to api server.
I0221 09:03:22.819774 1 shared_informer.go:247] Caches are synced for ReplicaSet I0221 09:03:22.822180 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:03:23.078397 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hvpn5" I0221 09:03:23.080718 1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b7vpv" I0221 09:03:23.241044 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:03:23.268409 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:03:23.268431 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0221 09:03:23.525737 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:03:23.625809 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-t6244" I0221 09:03:23.631868 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-svjnh" I0221 09:03:23.808344 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:03:23.819880 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-t6244" I0221 09:03:32.547490 1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode. I0221 09:03:40.621039 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:03:40.634471 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-lcmt9" W0221 09:03:40.638491 1 endpointslice_controller.go:306] Error syncing endpoint slices for service "default/netcat", retrying. 
Error: EndpointSlice informer cache is out of date I0221 09:03:40.642458 1 event.go:294] "Event occurred" object="netcat" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service default/netcat: endpoints \"netcat\" already exists" * * ==> kube-proxy [d411d70ae4d2] <== * I0221 09:03:23.945814 1 node.go:163] Successfully retrieved node IP: 192.168.49.2 I0221 09:03:23.945883 1 server_others.go:138] "Detected node IP" address="192.168.49.2" I0221 09:03:23.945928 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:03:23.970776 1 server_others.go:206] "Using iptables Proxier" I0221 09:03:23.970823 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:03:23.970834 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:03:23.970852 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:03:23.971373 1 server.go:656] "Version info" version="v1.23.4" I0221 09:03:23.972242 1 config.go:317] "Starting service config controller" I0221 09:03:23.972263 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:03:23.972287 1 config.go:226] "Starting endpoint slice config controller" I0221 09:03:23.972291 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:03:24.072973 1 shared_informer.go:247] Caches are synced for endpoint slice config I0221 09:03:24.072984 1 shared_informer.go:247] Caches are synced for service config * * ==> kube-scheduler [d3125748aff7] <== * W0221 09:03:07.723837 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0221 09:03:07.723865 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0221 09:03:07.723891 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 09:03:07.723910 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 09:03:08.546968 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 09:03:08.547050 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 09:03:08.586371 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 
*
* ==> kube-scheduler [d3125748aff7] <==
*
W0221 09:03:07.723837 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0221 09:03:07.723865 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0221 09:03:07.723891 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0221 09:03:07.723910 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0221 09:03:08.546968 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0221 09:03:08.547050 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0221 09:03:08.586371 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0221 09:03:08.586462 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0221 09:03:08.673241 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:08.673286 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:08.676239 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:08.676274 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:08.704454 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0221 09:03:08.704501 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0221 09:03:08.704604 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0221 09:03:08.704639 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0221 09:03:08.750624 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0221 09:03:08.750661 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0221 09:03:08.807066 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:08.807104 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:08.903652 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0221 09:03:08.903685 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0221 09:03:08.958559 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0221 09:03:08.958591 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0221 09:03:11.217744 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
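The forbidden list/watch errors above are the usual control-plane startup race: the scheduler's informers start before its RBAC bindings are visible, the reflectors retry, and once the initial lists succeed the caches report synced (the final line). A minimal client-go sketch of that same wait-for-sync pattern, assuming a kubeconfig at the default path; the pod informer and 30s resync are illustrative choices, not the scheduler's actual configuration:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (assumption: running outside the cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory, analogous to the informers/factory.go in the log.
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial list succeeds; list/watch failures (e.g. RBAC
	// not ready yet) are retried by the reflector behind this call.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("cache never synced")
	}
	pods, err := factory.Core().V1().Pods().Lister().List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Printf("synced; %d pods in cache\n", len(pods))
}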
*
* ==> kubelet <==
*
-- Logs begin at Mon 2022-02-21 09:02:54 UTC, end at Mon 2022-02-21 09:09:45 UTC. --
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:22.571383 1938 kuberuntime_manager.go:1098] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:22.571833 1938 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:22.571998 1938 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: E0221 09:03:22.580345 1938 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.084009 1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.086355 1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202185 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/70703c09-41bc-4c02-9ccf-df45333fbc70-cni-cfg\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202259 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eac36e6a-fd59-49e4-a536-c2aa610984ef-lib-modules\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202293 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70703c09-41bc-4c02-9ccf-df45333fbc70-xtables-lock\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202387 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmwp\" (UniqueName: \"kubernetes.io/projected/70703c09-41bc-4c02-9ccf-df45333fbc70-kube-api-access-nlmwp\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202470 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70703c09-41bc-4c02-9ccf-df45333fbc70-lib-modules\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202595 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eac36e6a-fd59-49e4-a536-c2aa610984ef-kube-proxy\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202647 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eac36e6a-fd59-49e4-a536-c2aa610984ef-xtables-lock\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202684 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncqr\" (UniqueName: \"kubernetes.io/projected/eac36e6a-fd59-49e4-a536-c2aa610984ef-kube-api-access-8ncqr\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:25 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:25.721597 1938 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
Feb 21 09:03:26 kindnet-20220221084934-6550 kubelet[1938]: E0221 09:03:26.242713 1938 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.791100 1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.791382 1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959217 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4kv\" (UniqueName: \"kubernetes.io/projected/84ae4f8f-baa9-4b02-a1f6-5d9026e71769-kube-api-access-nf4kv\") pod \"storage-provisioner\" (UID: \"84ae4f8f-baa9-4b02-a1f6-5d9026e71769\") " pod="kube-system/storage-provisioner"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959297 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd666a7b-1888-4f96-8615-0a625ca7c35a-config-volume\") pod \"coredns-64897985d-svjnh\" (UID: \"cd666a7b-1888-4f96-8615-0a625ca7c35a\") " pod="kube-system/coredns-64897985d-svjnh"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959339 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/84ae4f8f-baa9-4b02-a1f6-5d9026e71769-tmp\") pod \"storage-provisioner\" (UID: \"84ae4f8f-baa9-4b02-a1f6-5d9026e71769\") " pod="kube-system/storage-provisioner"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959375 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnssc\" (UniqueName: \"kubernetes.io/projected/cd666a7b-1888-4f96-8615-0a625ca7c35a-kube-api-access-wnssc\") pod \"coredns-64897985d-svjnh\" (UID: \"cd666a7b-1888-4f96-8615-0a625ca7c35a\") " pod="kube-system/coredns-64897985d-svjnh"
Feb 21 09:03:40 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:40.639552 1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:40 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:40.807948 1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtsc\" (UniqueName: \"kubernetes.io/projected/0fd0efca-25d3-42b8-b210-f9f1dd5821bd-kube-api-access-dxtsc\") pod \"netcat-668db85669-lcmt9\" (UID: \"0fd0efca-25d3-42b8-b210-f9f1dd5821bd\") " pod="default/netcat-668db85669-lcmt9"
Feb 21 09:03:41 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:41.264386 1938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c3ef6cecbc94f88f4f4ba2852ddd55bb38a48d6eba24c50cc663a7059acb1abb"

*
* ==> storage-provisioner [4a4b744690f2] <==
*
I0221 09:03:32.483323 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0221 09:03:32.511147 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0221 09:03:32.511218 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0221 09:03:32.532448 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0221 09:03:32.532609 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1caf4c3-f6ca-4315-b5f2-ad23ee3af26a", APIVersion:"v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kindnet-20220221084934-6550_b8beaf6e-41e9-47e8-8fb7-ee09cb02d620 became leader
I0221 09:03:32.532621 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kindnet-20220221084934-6550_b8beaf6e-41e9-47e8-8fb7-ee09cb02d620!
I0221 09:03:32.632930 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kindnet-20220221084934-6550_b8beaf6e-41e9-47e8-8fb7-ee09cb02d620!
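The storage-provisioner lines show the standard leader-election handshake: acquire a lock object in kube-system, announce a LeaderElection event, then start the controller. A sketch of the same pattern using client-go's leaderelection package, assuming in-cluster credentials; the Lease lock type, names, and timings are illustrative (the provisioner above uses an Endpoints-based lock named k8s.io-minikube-hostpath):

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	host, _ := os.Hostname()

	// A Lease lock in kube-system; name and namespace are illustrative.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "example-provisioner",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: host})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting controller")
				<-ctx.Done() // the real controller loop would run here
			},
			OnStoppedLeading: func() { log.Println("lost lease, shutting down") },
		},
	})
}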
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kindnet-20220221084934-6550 -n kindnet-20220221084934-6550
helpers_test.go:262: (dbg) Run: kubectl --context kindnet-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/kindnet]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context kindnet-20220221084934-6550 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 describe pod : exit status 1 (38.839765ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context kindnet-20220221084934-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "kindnet-20220221084934-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p kindnet-20220221084934-6550
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/kindnet
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kindnet-20220221084934-6550: (2.714574033s)
=== CONT TestStartStop/group/old-k8s-version
=== RUN TestStartStop/group/old-k8s-version/serial
=== RUN TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130719111s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.189892235s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:10:10.799921 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15894252s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143725941s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:10:29.028950 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 09:10:33.614245 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127007778s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128824565s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13905095s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1416818s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:11:16.369953 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132482174s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133379348s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:11:44.054101 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:11:46.065538 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.071578 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.082474 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.103250 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.144057 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.225233 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.386034 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.706601 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:47.347094 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:48.628104 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:51.188585 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:56.308747 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0: (2m9.314051164s)
=== RUN TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run: kubectl --context old-k8s-version-20220221090948-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [11b2bdf2-6442-4808-bb1b-2dc867613d07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138079671s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:343: "busybox" [11b2bdf2-6442-4808-bb1b-2dc867613d07] Running
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011380433s
start_stop_delete_test.go:181: (dbg) Run: kubectl --context old-k8s-version-20220221090948-6550 exec busybox -- /bin/sh -c "ulimit -n"
=== RUN TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220221090948-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0221 09:12:06.548985 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:200: (dbg) Run: kubectl --context old-k8s-version-20220221090948-6550 describe deploy/metrics-server -n kube-system
=== RUN TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run: out/minikube-linux-amd64 stop -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=3
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker --container-runtime=docker: (4m50.281757661s)
=== RUN TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run: out/minikube-linux-amd64 ssh -p kubenet-20220221084933-6550 "pgrep -a kubelet"
=== RUN TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run: kubectl --context kubenet-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-4md9w" [8cc65eb5-ef82-4281-90a6-859ab9f89010] Pending
helpers_test.go:343: "netcat-668db85669-4md9w" [8cc65eb5-ef82-4281-90a6-859ab9f89010] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=3: (10.969291333s)
=== RUN TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550: exit status 7 (102.4671ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220221090948-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144342485s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:343: "netcat-668db85669-4md9w" [8cc65eb5-ef82-4281-90a6-859ab9f89010] Running
E0221 09:12:27.029704 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.006043367s
=== RUN TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125001889s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:12:30.568932 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:12:36.193616 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148930134s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141584405s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122491787s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:13:07.990646 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12948673s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:13:29.174402 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132475766s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
=== CONT TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143988318s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
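Every DNS subtest above loops on the same probe: exec nslookup kubernetes.default inside the netcat deployment and expect the service-network address 10.96.0.1 in the answer; ";; connection timed out" on each attempt means pod-to-cluster-DNS traffic never works. A standalone approximation of that probe, assuming kubectl on PATH and the context name from this run; the retry count and interval are arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same probe the DNS subtests loop on: exec nslookup inside the netcat
	// deployment and look for the expected service IP in the answer.
	const kubeContext = "bridge-20220221084933-6550" // profile name from the log
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err == nil && strings.Contains(string(out), "10.96.0.1") {
			fmt.Println("cluster DNS OK")
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(15 * time.Second)
	}
	fmt.Println("cluster DNS never resolved kubernetes.default")
}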
=== CONT TestNetworkPlugins/group/bridge
net_test.go:198: "bridge" test finished in 24m0.75141613s, failed=true
net_test.go:199: *** TestNetworkPlugins/group/bridge FAILED at 2022-02-21 09:13:34.512960034 +0000 UTC m=+2907.275279626
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/bridge]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect bridge-20220221084933-6550
helpers_test.go:236: (dbg) docker inspect bridge-20220221084933-6550:
-- stdout --
[ { "Id": "92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79", "Created": "2022-02-21T09:04:01.183512299Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 452177, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T09:04:01.608435405Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/resolv.conf", "HostnamePath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/hostname", "HostsPath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/hosts", "LogPath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79-json.log", "Name": "/bridge-20220221084933-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [
"/lib/modules:/lib/modules:ro", "bridge-20220221084933-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "bridge-20220221084933-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344
cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a/merged", "UpperDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a/diff", "WorkDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "bridge-20220221084933-6550", "Source": "/var/lib/docker/volumes/bridge-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "bridge-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "bridge-20220221084933-6550", "name.minikube.sigs.k8s.io": "bridge-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "0f4bfabff1b3d095a573c55ed3b3202d1cf91495e39f99183c5a4ec4ee6861c4", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49394" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49393" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49390" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49392" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49391" } ] }, "SandboxKey": "/var/run/docker/netns/0f4bfabff1b3", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "bridge-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.67.2" }, "Links": null, "Aliases": [ "92f2512247c4", "bridge-20220221084933-6550" ], "NetworkID": "0c80bded97cfa73ce5c331c3eb3fb63b7ea93362767e43bd30c1be5861caa896", "EndpointID": 
"44fb9677e35dc60cc44ff4015b2a55e27655ad27e76bab29c355b60245b43a65", "Gateway": "192.168.67.1", "IPAddress": "192.168.67.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:43:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p bridge-20220221084933-6550 -n bridge-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/bridge FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/bridge]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p bridge-20220221084933-6550 logs -n 25 E0221 09:13:35.029707 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.035270 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.045514 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.066322 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.106616 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.186973 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.347422 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.668057 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p bridge-20220221084933-6550 logs -n 25: (1.118026474s) helpers_test.go:253: TestNetworkPlugins/group/bridge logs: -- stdout -- * * ==> Audit <== * |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | ssh | -p 
false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | 
--container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | | -p | kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:44 UTC | Mon, 21 Feb 2022 09:09:45 UTC | | | logs -n 25 | | | | | | | delete | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:45 UTC | Mon, 21 Feb 2022 09:09:48 UTC | | start | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:48 UTC | Mon, 21 Feb 2022 09:11:57 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --kvm-network=default | | | | | | | | --kvm-qemu-uri=qemu:///system | | | | | | | | --disable-driver-mounts | | | | | | | | --keep-context=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.16.0 | | | | | | | addons | enable metrics-server -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:06 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | | | | --registries=MetricsServer=fake.domain | | | | | | | start | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:25 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --network-plugin=kubenet | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:15 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | pgrep -a kubelet | | | | | | | stop | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:17 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:12:18 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:12:18.063563 481686 out.go:297] Setting OutFile to fd 1 ... I0221 09:12:18.063667 481686 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:12:18.063680 481686 out.go:310] Setting ErrFile to fd 2... 
I0221 09:12:18.063686 481686 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:12:18.063879 481686 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:12:18.064401 481686 out.go:304] Setting JSON to false I0221 09:12:18.066180 481686 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3292,"bootTime":1645431446,"procs":471,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:12:18.066283 481686 start.go:122] virtualization: kvm guest I0221 09:12:18.069062 481686 out.go:176] * [old-k8s-version-20220221090948-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:12:18.070941 481686 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:12:18.069225 481686 notify.go:193] Checking for updates... I0221 09:12:18.072550 481686 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:12:18.074232 481686 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:12:18.075722 481686 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:12:18.077236 481686 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:12:18.077830 481686 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0 I0221 09:12:18.079929 481686 out.go:176] * Kubernetes 1.23.4 is now available. 
If you would like to upgrade, specify: --kubernetes-version=v1.23.4 I0221 09:12:18.079966 481686 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:12:18.132076 481686 docker.go:132] docker version: linux-20.10.12 I0221 09:12:18.132199 481686 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:12:18.243268 481686 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:12:18.170280502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 09:12:18.243371 481686 docker.go:237] overlay module found I0221 09:12:18.245988 481686 out.go:176] * Using the docker driver based on existing profile I0221 09:12:18.246020 481686 start.go:281] selected driver: docker I0221 09:12:18.246026 481686 start.go:798] validating driver "docker" against &{Name:old-k8s-version-20220221090948-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:12:18.246140 481686 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:12:18.246188 481686 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:12:18.246211 481686 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 09:12:18.247617 481686 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:12:18.248269 481686 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:12:18.365356 481686 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:12:18.283585104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} W0221 09:12:18.365474 481686 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:12:18.365497 481686 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 09:12:18.369348 481686 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:12:18.369446 481686 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:12:18.369488 481686 cni.go:93] Creating CNI manager for "" I0221 09:12:18.369505 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:12:18.369519 481686 start_flags.go:302] config: {Name:old-k8s-version-20220221090948-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:12:18.371447 481686 out.go:176] * Starting control plane node old-k8s-version-20220221090948-6550 in cluster old-k8s-version-20220221090948-6550 I0221 09:12:18.371481 481686 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:12:18.372838 481686 out.go:176] * Pulling base image ... 
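The pair of warnings above (repeated once per "docker system info" call) means minikube's preflight decided it cannot enforce the requested --memory=2200 limit on the kic container for this host's cgroup configuration. A few hedged ways to confirm the state of the memory controller on the host (standard commands; exact output is distribution-dependent):

    # is the cgroup memory controller present and enabled? ("enabled" column should be 1)
    grep memory /proc/cgroups
    # does the Docker daemon believe it can set memory limits?
    docker info --format '{{.MemoryLimit}}'
    # on some distros the controller must be turned on via the kernel command line
    grep -o 'cgroup_enable=memory\|swapaccount=1' /proc/cmdline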
I0221 09:12:18.372866 481686 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker I0221 09:12:18.372899 481686 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 I0221 09:12:18.372907 481686 cache.go:57] Caching tarball of preloaded images I0221 09:12:18.372961 481686 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:12:18.373221 481686 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:12:18.373240 481686 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker I0221 09:12:18.373359 481686 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/config.json ... I0221 09:12:18.437016 481686 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:12:18.437050 481686 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:12:18.437063 481686 cache.go:208] Successfully downloaded all kic artifacts I0221 09:12:18.437096 481686 start.go:313] acquiring machines lock for old-k8s-version-20220221090948-6550: {Name:mkc2c1cda1482e6b6fedc7dd454394ebc20d0304 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:12:18.437205 481686 start.go:317] acquired machines lock for "old-k8s-version-20220221090948-6550" in 82.821µs I0221 09:12:18.437229 481686 start.go:93] Skipping create...Using existing machine configuration I0221 09:12:18.437236 481686 fix.go:55] fixHost starting: I0221 09:12:18.437532 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:12:18.474067 481686 fix.go:108] recreateIfNeeded on old-k8s-version-20220221090948-6550: state=Stopped err= W0221 09:12:18.474097 481686 fix.go:134] unexpected machine state, will restart: I0221 09:12:18.477131 481686 out.go:176] * Restarting existing docker container for "old-k8s-version-20220221090948-6550" ... I0221 09:12:18.477189 481686 cli_runner.go:133] Run: docker start old-k8s-version-20220221090948-6550 I0221 09:12:18.916066 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:12:18.958039 481686 kic.go:420] container "old-k8s-version-20220221090948-6550" state is running. 
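Every download in this restart path is skipped because the artifacts already exist: the v1.16.0 preload tarball is in the .minikube cache, the kicbase image is in the local Docker daemon, and the profile's container only has to be started again. A sketch of the same three checks done by hand (names and paths taken from the log above; $MINIKUBE_HOME stands in for the harness's .minikube directory):

    # 1. preload tarball for this Kubernetes version / container runtime
    ls "$MINIKUBE_HOME/cache/preloaded-tarball/" | grep 'v1.16.0-docker'
    # 2. kic base image already loaded into the local daemon, so the pull is skipped
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
    # 3. the container state that decides between create and restart ("exited" means restart)
    docker container inspect old-k8s-version-20220221090948-6550 --format '{{.State.Status}}'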
I0221 09:12:18.958636 481686 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220221090948-6550 I0221 09:12:18.997177 481686 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/config.json ... I0221 09:12:18.997400 481686 machine.go:88] provisioning docker machine ... I0221 09:12:18.997429 481686 ubuntu.go:169] provisioning hostname "old-k8s-version-20220221090948-6550" I0221 09:12:18.997463 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:19.046081 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:19.046324 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:19.046353 481686 main.go:130] libmachine: About to run SSH command: sudo hostname old-k8s-version-20220221090948-6550 && echo "old-k8s-version-20220221090948-6550" | sudo tee /etc/hostname I0221 09:12:19.047041 481686 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45212->127.0.0.1:49409: read: connection reset by peer I0221 09:12:22.179930 481686 main.go:130] libmachine: SSH cmd err, output: : old-k8s-version-20220221090948-6550 I0221 09:12:22.180015 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:22.214794 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:22.214943 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:22.214963 481686 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sold-k8s-version-20220221090948-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220221090948-6550/g' /etc/hosts; else echo '127.0.1.1 old-k8s-version-20220221090948-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:12:22.338910 481686 main.go:130] libmachine: SSH cmd err, output: : I0221 09:12:22.338956 481686 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:12:22.339028 481686 ubuntu.go:177] setting up certificates I0221 09:12:22.339043 481686 provision.go:83] configureAuth start I0221 09:12:22.339106 481686 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220221090948-6550 I0221 09:12:22.372440 481686 provision.go:138] copyHostCerts I0221 09:12:22.372507 481686 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:12:22.372520 481686 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:12:22.372590 481686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:12:22.372706 481686 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... 
I0221 09:12:22.372722 481686 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:12:22.372750 481686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:12:22.372831 481686 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:12:22.372844 481686 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:12:22.372873 481686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:12:22.372945 481686 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220221090948-6550 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220221090948-6550] I0221 09:12:22.657456 481686 provision.go:172] copyRemoteCerts I0221 09:12:22.657524 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:12:22.657556 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:22.691986 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:22.778536 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes) I0221 09:12:22.796303 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:12:22.813782 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:12:22.831458 481686 provision.go:86] duration metric: configureAuth took 492.398552ms I0221 09:12:22.831489 481686 ubuntu.go:193] setting minikube options for container-runtime I0221 09:12:22.831672 481686 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, 
ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0221 09:12:22.831714 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:12:22.865147 481686 main.go:130] libmachine: Using SSH client type: native
I0221 09:12:22.865310 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 }
I0221 09:12:22.865323 481686 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0221 09:12:22.987160 481686 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 09:12:22.987181 481686 ubuntu.go:71] root file system type: overlay
I0221 09:12:22.987381 481686 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 09:12:22.987440 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:12:23.022112 481686 main.go:130] libmachine: Using SSH client type: native
I0221 09:12:23.022272 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 }
I0221 09:12:23.022370 481686 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 09:12:23.151950 481686 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0221 09:12:23.152020 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:12:23.186215 481686 main.go:130] libmachine: Using SSH client type: native
I0221 09:12:23.186462 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 }
I0221 09:12:23.186483 481686 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:12:23.311126 481686 main.go:130] libmachine: SSH cmd err, output: :
I0221 09:12:23.311159 481686 machine.go:91] provisioned docker machine in 4.313744156s
I0221 09:12:23.311168 481686 start.go:267] post-start starting for "old-k8s-version-20220221090948-6550" (driver="docker")
I0221 09:12:23.311173 481686 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:12:23.311226 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:12:23.311263 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:12:23.345260 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker}
I0221 09:12:23.435066 481686 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:12:23.438889 481686 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:12:23.438921 481686 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:12:23.438935 481686 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:12:23.438941 481686 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:12:23.438952 481686 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
I0221 09:12:23.439058 481686 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ...
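The docker.service rewrite above is idempotent: the regenerated unit is written to docker.service.new, and the daemon is only reloaded and restarted when that file differs from the installed one, so an unchanged unit costs nothing more than a diff. Spelled out as a standalone script, the pattern the provisioner runs over SSH looks roughly like this (same commands as in the log, rearranged into an if-block):

    # swap in the regenerated unit only when it actually changed
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi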
I0221 09:12:23.439165 481686 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:12:23.439290 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:12:23.446645 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:12:23.464506 481686 start.go:270] post-start completed in 153.326516ms I0221 09:12:23.464566 481686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:12:23.464602 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.498324 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.587416 481686 fix.go:57] fixHost completed within 5.15017508s I0221 09:12:23.587444 481686 start.go:80] releasing machines lock for "old-k8s-version-20220221090948-6550", held for 5.15022486s I0221 09:12:23.587526 481686 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220221090948-6550 I0221 09:12:23.621246 481686 ssh_runner.go:195] Run: systemctl --version I0221 09:12:23.621295 481686 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:12:23.621306 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.621335 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.658227 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.660144 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.890444 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:12:23.902665 481686 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:12:23.912077 481686 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:12:23.912502 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:12:23.922862 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:12:23.935994 481686 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:12:24.015940 481686 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 
09:12:24.095712 481686 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 09:12:24.106277 481686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0221 09:12:24.184533 481686 ssh_runner.go:195] Run: sudo systemctl start docker
I0221 09:12:24.194410 481686 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:12:24.235854 481686 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:12:24.278883 481686 out.go:203] * Preparing Kubernetes v1.16.0 on Docker 20.10.12 ...
I0221 09:12:24.278954 481686 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220221090948-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:12:24.313540 481686 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0221 09:12:24.317078 481686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:12:24.328491 481686 out.go:176]   - kubelet.housekeeping-interval=5m
I0221 09:12:24.328555 481686 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0221 09:12:24.328601 481686 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:12:24.364128 481686 docker.go:606] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
k8s.gcr.io/pause:3.1
-- /stdout --
I0221 09:12:24.364151 481686 docker.go:537] Images already preloaded, skipping extraction
I0221 09:12:24.364203 481686 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:12:24.399461 481686 docker.go:606] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
k8s.gcr.io/pause:3.1
-- /stdout --
I0221 09:12:24.399490 481686 cache_images.go:84] Images are preloaded, skipping loading
I0221 09:12:24.399541 481686 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0221 09:12:24.486001 481686 cni.go:93] Creating CNI manager for ""
I0221 09:12:24.486035 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:12:24.486052 481686 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0221 09:12:24.486070 481686 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220221090948-6550 NodeName:old-k8s-version-20220221090948-6550 DNSDomain:cluster.local
CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 09:12:24.486248 481686 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "old-k8s-version-20220221090948-6550"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: old-k8s-version-20220221090948-6550
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
kubernetesVersion: v1.16.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 09:12:24.486349 481686 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220221090948-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:12:24.486406 481686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0 I0221 09:12:24.493584 481686 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:12:24.493638 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:12:24.500464 481686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes) I0221 09:12:24.514267 481686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:12:24.527574 481686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes) I0221 09:12:24.540712 481686 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0221 09:12:24.543820 481686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:12:24.553284 481686 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550 for IP: 192.168.49.2 I0221 09:12:24.553402 481686 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:12:24.553455 481686 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:12:24.553547 481686 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.key I0221 09:12:24.553629 481686 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/apiserver.key.dd3b5fb2 I0221 09:12:24.553681 481686 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/proxy-client.key I0221 09:12:24.553795 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:12:24.553832 481686 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:12:24.553848 481686 certs.go:388] found cert: 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:12:24.553887 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:12:24.553918 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:12:24.553962 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:12:24.554056 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:12:24.555294 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:12:24.573861 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:12:24.591640 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:12:24.609765 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:12:24.628088 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:12:24.645704 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:12:24.663625 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:12:24.681704 
481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:12:24.699295 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:12:24.716958 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:12:24.735157 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:12:24.753362 481686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:12:24.766918 481686 ssh_runner.go:195] Run: openssl version I0221 09:12:24.772057 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:12:24.780093 481686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:12:24.783295 481686 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:12:24.783344 481686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:12:24.788424 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:12:24.795451 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:12:24.803050 481686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:12:24.806096 481686 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:12:24.806134 481686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:12:24.810891 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:12:24.817716 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:12:24.825395 481686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:12:24.828413 481686 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:12:24.828454 481686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:12:24.833530 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:12:24.840555 481686 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220221090948-6550 KeepContext:false EmbedCerts:false MinikubeISO: 
KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:12:24.840674 481686 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:12:24.873394 481686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:12:24.881295 481686 kubeadm.go:402] found existing configuration files, will attempt cluster restart I0221 09:12:24.881322 481686 kubeadm.go:601] restartCluster start I0221 09:12:24.881365 481686 ssh_runner.go:195] Run: sudo test -d /data/minikube I0221 09:12:24.888139 481686 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0221 09:12:24.889073 481686 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220221090948-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:12:24.889498 481686 kubeconfig.go:127] "old-k8s-version-20220221090948-6550" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - will repair! 
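The "verify returned" error above is expected after a stop: the profile's context was dropped from the harness kubeconfig, and minikube notices and re-adds it before reconnecting to the cluster. The same state can be inspected by hand with kubectl (a sketch; it assumes KUBECONFIG points at the harness kubeconfig file named in the log):

    # before the repair the profile's context is absent; after it, both commands succeed
    kubectl config get-contexts
    kubectl config use-context old-k8s-version-20220221090948-6550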
I0221 09:12:24.890203 481686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:12:24.892424 481686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0221 09:12:24.899475 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:24.899523 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:24.913455 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.113895 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.113968 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.128519 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.313555 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.313624 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.328255 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.514557 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.514685 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.529308 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.714548 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.714633 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.729538 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.913691 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.913755 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.928505 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.113748 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.113812 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.128291 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.314571 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.314664 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.328927 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.514257 481686 api_server.go:165] Checking apiserver status ... 
I0221 09:12:26.514328 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.529228 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.714529 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.714601 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.729366 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.913603 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.913680 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.928318 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.114563 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.114634 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.129974 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.314287 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.314379 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.328939 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.514132 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.514234 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.528968 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.714191 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.714255 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.728825 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.914122 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.914198 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.928679 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.928702 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.928734 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.942333 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.942363 481686 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition I0221 09:12:27.942370 481686 kubeadm.go:1067] stopping kube-system containers ... 
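The repeated "Checking apiserver status ..." / "stopped:" pairs above are iterations of a poll over sudo pgrep -xnf kube-apiserver.*minikube.* that gives up after a few seconds, which is what produces "needs reconfigure: apiserver error: timed out waiting for the condition". A plain-Go sketch of that polling shape (not minikube's actual retry helper); the ~200ms interval is read off the log timestamps:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverPID polls pgrep until it reports a PID or the timeout elapses.
    func apiserverPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            if time.Now().After(deadline) {
                // matches the log: "timed out waiting for the condition"
                return "", errors.New("timed out waiting for the condition")
            }
            time.Sleep(200 * time.Millisecond) // the log shows roughly 200ms between checks
        }
    }

    func main() {
        pid, err := apiserverPID(3 * time.Second)
        fmt.Println(pid, err)
    }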
I0221 09:12:27.942413 481686 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:12:27.977250 481686 docker.go:438] Stopping containers: [ad0e124a6147 6ac3974c8e9e 19594b2a5b28 fe46eea790da 494b0840ef1b 294e5c15540f 67385820bcc2 f10e557d91a5 d0ae540750ea 93c6a46109d3 5d114ac431ec 00310aa9fd81 d7e39eddf339 6f822b6e43e7] I0221 09:12:27.977321 481686 ssh_runner.go:195] Run: docker stop ad0e124a6147 6ac3974c8e9e 19594b2a5b28 fe46eea790da 494b0840ef1b 294e5c15540f 67385820bcc2 f10e557d91a5 d0ae540750ea 93c6a46109d3 5d114ac431ec 00310aa9fd81 d7e39eddf339 6f822b6e43e7 I0221 09:12:28.014554 481686 ssh_runner.go:195] Run: sudo systemctl stop kubelet I0221 09:12:28.024919 481686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:12:28.032039 481686 kubeadm.go:155] found existing configuration files: -rw------- 1 root root 5747 Feb 21 09:10 /etc/kubernetes/admin.conf -rw------- 1 root root 5783 Feb 21 09:10 /etc/kubernetes/controller-manager.conf -rw------- 1 root root 5919 Feb 21 09:10 /etc/kubernetes/kubelet.conf -rw------- 1 root root 5731 Feb 21 09:10 /etc/kubernetes/scheduler.conf I0221 09:12:28.032102 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0221 09:12:28.038923 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0221 09:12:28.045850 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0221 09:12:28.052684 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0221 09:12:28.059412 481686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:12:28.066289 481686 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml I0221 09:12:28.066315 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:28.118534 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:28.863240 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:29.097233 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:29.156501 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:29.244710 481686 api_server.go:51] waiting for apiserver process to appear ... 
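The five kubeadm init phase invocations above regenerate certs, kubeconfigs, the kubelet bootstrap, the static control-plane manifests, and local etcd without a full kubeadm init. A sketch that replays the same sequence; the PATH prefix and config path are copied from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.16.0:$PATH\" kubeadm init phase " +
                p + " --config /var/tmp/minikube/kubeadm.yaml"
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                log.Fatalf("phase %q failed: %v\n%s", p, err, out)
            }
        }
    }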
I0221 09:12:29.244765 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:29.760000 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:30.260268 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:30.759711 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:30.824845 481686 api_server.go:71] duration metric: took 1.580135004s to wait for apiserver process to appear ... I0221 09:12:30.824881 481686 api_server.go:87] waiting for apiserver healthz status ... I0221 09:12:30.824894 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:12:35.260175 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0221 09:12:35.260248 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} I0221 09:12:35.760929 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:12:35.765652 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [-]poststarthook/ca-registration failed: reason withheld [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0221 09:12:35.765674 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [-]poststarthook/ca-registration failed: reason withheld [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0221 09:12:36.260914 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
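The 403 above is expected while the cluster is still bootstrapping: the health poller connects anonymously, and /healthz answers 403 until RBAC lets system:anonymous in, then 500 while poststarthooks are still failing, then 200. A minimal probe with the same behaviour; TLS verification is skipped because this sketch does not load the cluster CA, which is acceptable for a local liveness probe only:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous, unverified probe
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // 403 = anonymous user rejected, 500 = some poststarthook still failing, 200 = ok
        fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
    }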
I0221 09:12:36.307534 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0221 09:12:36.307571 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0221 09:12:36.760752 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0221 09:12:36.808563 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0221 09:12:36.808654 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0221 09:12:37.260827 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:12:37.266069 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 09:12:37.272355 481686 api_server.go:140] control plane version: v1.16.0 I0221 09:12:37.272382 481686 api_server.go:130] duration metric: took 6.447494019s to wait for apiserver health ... I0221 09:12:37.272396 481686 cni.go:93] Creating CNI manager for "" I0221 09:12:37.272404 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:12:37.272414 481686 system_pods.go:43] waiting for kube-system pods to appear ... 
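After /healthz goes 200, the log reads the control plane version back (v1.16.0). One way to get that figure is the apiserver's /version endpoint, which serves JSON to unauthenticated clients by default; a small sketch of that lookup (the log itself goes through a Kubernetes client, so treat this as an equivalent, not the same call):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type versionInfo struct {
        GitVersion string `json:"gitVersion"` // e.g. "v1.16.0"
    }

    func main() {
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        resp, err := (&http.Client{Transport: tr}).Get("https://192.168.49.2:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }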
I0221 09:12:37.282611 481686 system_pods.go:59] 7 kube-system pods found I0221 09:12:37.282652 481686 system_pods.go:61] "coredns-5644d7b6d9-4jfjr" [445faf1b-887e-484a-bb35-92f88222e76b] Running I0221 09:12:37.282658 481686 system_pods.go:61] "etcd-old-k8s-version-20220221090948-6550" [b3071ff1-0324-474f-ab77-8fd44e1ebc83] Running I0221 09:12:37.282662 481686 system_pods.go:61] "kube-apiserver-old-k8s-version-20220221090948-6550" [708fda44-6a97-49f1-95b0-7cc9c9d7ac36] Running I0221 09:12:37.282665 481686 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220221090948-6550" [27071ca4-76ac-4233-8ab3-79113ba20d1f] Running I0221 09:12:37.282669 481686 system_pods.go:61] "kube-proxy-tdxwc" [486ca50e-8d88-462f-ab2b-90c0b323fee8] Running I0221 09:12:37.282674 481686 system_pods.go:61] "kube-scheduler-old-k8s-version-20220221090948-6550" [337538dd-9afc-4bc6-8bea-2b54c6104252] Running I0221 09:12:37.282677 481686 system_pods.go:61] "storage-provisioner" [acc16a62-19b6-4669-88e9-91a96f7d0f59] Running I0221 09:12:37.282682 481686 system_pods.go:74] duration metric: took 10.258953ms to wait for pod list to return data ... I0221 09:12:37.282691 481686 node_conditions.go:102] verifying NodePressure condition ... I0221 09:12:37.286204 481686 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:12:37.286240 481686 node_conditions.go:123] node cpu capacity is 8 I0221 09:12:37.286253 481686 node_conditions.go:105] duration metric: took 3.557872ms to run NodePressure ... I0221 09:12:37.286273 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:37.453002 481686 kubeadm.go:737] waiting for restarted kubelet to initialise ... I0221 09:12:37.456863 481686 retry.go:31] will retry after 276.165072ms: kubelet not initialised I0221 09:12:37.736865 481686 retry.go:31] will retry after 540.190908ms: kubelet not initialised I0221 09:12:38.280617 481686 retry.go:31] will retry after 655.06503ms: kubelet not initialised I0221 09:12:38.939642 481686 retry.go:31] will retry after 791.196345ms: kubelet not initialised I0221 09:12:39.735022 481686 retry.go:31] will retry after 1.170244332s: kubelet not initialised I0221 09:12:40.909813 481686 retry.go:31] will retry after 2.253109428s: kubelet not initialised I0221 09:12:43.166877 481686 retry.go:31] will retry after 1.610739793s: kubelet not initialised I0221 09:12:44.782170 481686 retry.go:31] will retry after 2.804311738s: kubelet not initialised I0221 09:12:47.591132 481686 retry.go:31] will retry after 3.824918958s: kubelet not initialised I0221 09:12:51.421422 481686 retry.go:31] will retry after 7.69743562s: kubelet not initialised I0221 09:12:59.122620 481686 retry.go:31] will retry after 14.635568968s: kubelet not initialised I0221 09:13:13.762364 481686 kubeadm.go:752] kubelet initialised I0221 09:13:13.762387 481686 kubeadm.go:753] duration metric: took 36.309357684s waiting for restarted kubelet to initialise ... I0221 09:13:13.762394 481686 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:13:13.765803 481686 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4jfjr" in "kube-system" namespace to be "Ready" ... 
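The retry.go lines above show jittered, roughly doubling wait intervals (276ms, 540ms, 655ms, ... 14.6s) until the restarted kubelet reports initialised. A plain sketch of that backoff shape; the base, cap, and condition below are illustrative, not minikube's exact constants:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries fn with jittered, roughly doubling delays, as in the log.
    func retryExpo(fn func() error, base, max, deadline time.Duration) error {
        end := time.Now().Add(deadline)
        delay := base
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(end) {
                return errors.New("timed out waiting for kubelet")
            }
            jittered := delay/2 + time.Duration(rand.Int63n(int64(delay))) // 0.5x to 1.5x
            fmt.Printf("will retry after %v: kubelet not initialised\n", jittered)
            time.Sleep(jittered)
            if delay *= 2; delay > max {
                delay = max
            }
        }
    }

    func main() {
        start := time.Now()
        _ = retryExpo(func() error {
            if time.Since(start) > 3*time.Second {
                return nil // pretend the kubelet came up
            }
            return errors.New("kubelet not initialised")
        }, 300*time.Millisecond, 15*time.Second, 4*time.Minute)
    }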
I0221 09:13:13.773307 481686 pod_ready.go:92] pod "coredns-5644d7b6d9-4jfjr" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.773329 481686 pod_ready.go:81] duration metric: took 7.502811ms waiting for pod "coredns-5644d7b6d9-4jfjr" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.773338 481686 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-vqqfc" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.776551 481686 pod_ready.go:92] pod "coredns-5644d7b6d9-vqqfc" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.776568 481686 pod_ready.go:81] duration metric: took 3.225081ms waiting for pod "coredns-5644d7b6d9-vqqfc" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.776577 481686 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.779686 481686 pod_ready.go:92] pod "etcd-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.779705 481686 pod_ready.go:81] duration metric: took 3.121899ms waiting for pod "etcd-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.779718 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.782821 481686 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.782840 481686 pod_ready.go:81] duration metric: took 3.114979ms waiting for pod "kube-apiserver-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.782849 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.161532 481686 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:14.161557 481686 pod_ready.go:81] duration metric: took 378.700547ms waiting for pod "kube-controller-manager-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.161570 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tdxwc" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.561430 481686 pod_ready.go:92] pod "kube-proxy-tdxwc" in "kube-system" namespace has status "Ready":"True" I0221 09:13:14.561454 481686 pod_ready.go:81] duration metric: took 399.878102ms waiting for pod "kube-proxy-tdxwc" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.561463 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.962123 481686 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:14.962149 481686 pod_ready.go:81] duration metric: took 400.67974ms waiting for pod "kube-scheduler-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.962160 481686 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace to be "Ready" ... 
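Each pod_ready block above reduces to fetching the pod and reading its Ready condition. A client-go sketch of one such check; the kubeconfig path is a placeholder, and error handling is trimmed to keep it short:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-4jfjr", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                // mirrors the log: pod ... has status "Ready":"True"
                fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, c.Status)
            }
        }
    }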
I0221 09:13:17.367179 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:19.367534 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:21.866320 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:23.867171 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:26.367049 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:28.367351 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:30.866816 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:04:01 UTC, end at Mon 2022-02-21 09:13:35 UTC. -- Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.680475703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681624533Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681657617Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681680124Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681693834Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.687754849Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693110176Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693139050Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693144878Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693335705Z" level=info msg="Loading containers: start." Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.777936498Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.813301783Z" level=info msg="Loading containers: done." 
Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.824636992Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.824690941Z" level=info msg="Daemon has completed initialization" Feb 21 09:04:03 bridge-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.843099718Z" level=info msg="API listen on [::]:2376" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.846792381Z" level=info msg="API listen on /var/run/docker.sock" Feb 21 09:04:41 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:41.138003188Z" level=info msg="ignoring event" container=a0be41a7d766cdaba9403bf9df8395ee04391f81bc4dbd0908e9d6ec829fc323 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:04:41 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:41.253100899Z" level=info msg="ignoring event" container=8a472a83eaf77bf4ed3c47adea9a900aa30c9f51e075e8c13198eae504dd5135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:02 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:05:02.429884245Z" level=info msg="ignoring event" container=b217dfe43376c251bb43088d9560ae3139c324922c017d0a0045ec73b8ca947a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:32 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:05:32.908656803Z" level=info msg="ignoring event" container=8e3788818a6b1aae56233b447d95584be4c66937b500037d196c1e07e84f5828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:06:17 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:06:17.566902390Z" level=info msg="ignoring event" container=bc46acfa3d7c866121ea03403a121b9e648442fceffd9a6a32c9256973a09d29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:07:13 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:07:13.543450894Z" level=info msg="ignoring event" container=40d03e6cd1a30b184ea894fccdefa5fdc7c1bc310d94a582d45c491c646f47ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:08:34 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:08:34.577542651Z" level=info msg="ignoring event" container=dedfecc4ece76a44315ebba0e63995f63460bc4dc34f01432953ce831b08926f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:10:38 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:10:38.559805431Z" level=info msg="ignoring event" container=293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e990cc7800b7c 6e38f40d628db 4 seconds ago Running storage-provisioner 6 58296d2ef92ae 293c64d3f2e2a 6e38f40d628db 3 minutes ago Exited storage-provisioner 5 58296d2ef92ae 4c6fcccfa1394 k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 4 minutes ago Running dnsutils 0 74ab993c5eef1 8eb32092067f9 a4ca41631cc7a 9 minutes ago Running coredns 0 b299fa78d336f cd31aa9c0c743 2114245ec4d6b 9 minutes ago Running kube-proxy 0 
e5bc271195fab d092f7171bc6a 25444908517a5 9 minutes ago Running kube-controller-manager 0 79155ed30105b 6e69145b30ada aceacb6244f9f 9 minutes ago Running kube-scheduler 0 718e986929bb6 5eb857f7738e9 25f8c7f3da61c 9 minutes ago Running etcd 0 0691551fcb0ea 6a850a90d786b 62930710c9634 9 minutes ago Running kube-apiserver 0 3d86608597cbc * * ==> coredns [8eb32092067f] <== * [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" * * ==> describe nodes <== * Name: bridge-20220221084933-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=bridge-20220221084933-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=bridge-20220221084933-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T09_04_17_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 09:04:13 +0000 Taints: Unschedulable: false Lease: HolderIdentity: bridge-20220221084933-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:13:28 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:27 
+0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.67.2 Hostname: bridge-20220221084933-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: f05716f6-a1c5-4503-b665-f7090020f00e Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default netcat-668db85669-f2pzb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s kube-system coredns-64897985d-7jshp 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 9m6s kube-system etcd-bridge-20220221084933-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 9m18s kube-system kube-apiserver-bridge-20220221084933-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 9m18s kube-system kube-controller-manager-bridge-20220221084933-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 9m18s kube-system kube-proxy-pzvfl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m6s kube-system kube-scheduler-bridge-20220221084933-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 9m18s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m4s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 9m5s kube-proxy Normal NodeHasSufficientMemory 9m18s kubelet Node bridge-20220221084933-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m18s kubelet Node bridge-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m18s kubelet Node bridge-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 9m18s kubelet Updated Node Allocatable limit across pods Normal Starting 9m18s kubelet Starting kubelet.
Normal NodeReady 9m8s kubelet Node bridge-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [Feb21 09:13] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.015846] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000013] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027979] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +16.774814] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.011852] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023907] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.959842] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.007853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.027910] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.963841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.035853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023933] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 * * ==> etcd [5eb857f7738e] <== * {"level":"info","ts":"2022-02-21T09:04:11.122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T09:04:11.122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"} {"level":"info","ts":"2022-02-21T09:04:11.122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T09:04:11.122Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:bridge-20220221084933-6550 
ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T09:04:11.124Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:04:11.125Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:04:11.125Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"} {"level":"warn","ts":"2022-02-21T09:08:46.315Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"175.262593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:08:46.315Z","caller":"traceutil/trace.go:171","msg":"trace[1821785760] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:603; }","duration":"175.39819ms","start":"2022-02-21T09:08:46.140Z","end":"2022-02-21T09:08:46.315Z","steps":["trace[1821785760] 'count revisions from in-memory index tree' (duration: 175.159153ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:08:46.315Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.499053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2539"} {"level":"info","ts":"2022-02-21T09:08:46.316Z","caller":"traceutil/trace.go:171","msg":"trace[653792341] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:603; }","duration":"185.837506ms","start":"2022-02-21T09:08:46.130Z","end":"2022-02-21T09:08:46.316Z","steps":["trace[653792341] 'range keys from in-memory index tree' (duration: 185.385856ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:08:46.522Z","caller":"traceutil/trace.go:171","msg":"trace[142678578] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"103.930628ms","start":"2022-02-21T09:08:46.418Z","end":"2022-02-21T09:08:46.522Z","steps":["trace[142678578] 'process raft request' (duration: 103.732829ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:09:55.382Z","caller":"traceutil/trace.go:171","msg":"trace[2093338953] linearizableReadLoop","detail":"{readStateIndex:713; appliedIndex:712; }","duration":"151.068301ms","start":"2022-02-21T09:09:55.231Z","end":"2022-02-21T09:09:55.382Z","steps":["trace[2093338953] 'read index received' (duration: 52.660289ms)","trace[2093338953] 'applied index is now lower than readState.Index' (duration: 98.407058ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:09:55.382Z","caller":"traceutil/trace.go:171","msg":"trace[419954547] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; 
}","duration":"232.574305ms","start":"2022-02-21T09:09:55.150Z","end":"2022-02-21T09:09:55.382Z","steps":["trace[419954547] 'process raft request' (duration: 134.193512ms)","trace[419954547] 'compare' (duration: 98.260909ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:09:55.382Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.218895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:55.382Z","caller":"traceutil/trace.go:171","msg":"trace[1744173959] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:631; }","duration":"151.266588ms","start":"2022-02-21T09:09:55.231Z","end":"2022-02-21T09:09:55.382Z","steps":["trace[1744173959] 'agreement among raft nodes before linearized reading' (duration: 151.177123ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:55.697Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"188.096018ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:55.697Z","caller":"traceutil/trace.go:171","msg":"trace[2037857497] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:631; }","duration":"188.171073ms","start":"2022-02-21T09:09:55.509Z","end":"2022-02-21T09:09:55.697Z","steps":["trace[2037857497] 'range keys from in-memory index tree' (duration: 188.020649ms)"],"step_count":1} * * ==> kernel <== * 09:13:35 up 56 min, 0 users, load average: 0.49, 1.43, 2.40 Linux bridge-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [6a850a90d786] <== * I0221 09:04:13.663490 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 09:04:13.663528 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 09:04:13.663538 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 09:04:13.676956 1 cache.go:39] Caches are synced for autoregister controller I0221 09:04:13.680534 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 09:04:14.562572 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 09:04:14.566706 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 09:04:14.568768 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 09:04:14.570005 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 09:04:14.570021 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 09:04:15.024009 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 09:04:15.062781 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 09:04:15.135946 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 09:04:15.142919 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2] I0221 09:04:15.144226 1 controller.go:611] quota admission added evaluator for: endpoints I0221 09:04:15.147956 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 09:04:15.718373 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 09:04:16.867062 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 09:04:16.877719 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 09:04:16.902701 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 09:04:17.127153 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 09:04:29.273786 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 09:04:29.373045 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 09:04:30.105604 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:08:42.077781 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.107.145.137] * * ==> kube-controller-manager [d092f7171bc6] <== * I0221 09:04:28.669670 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: W0221 09:04:28.669737 1 node_lifecycle_controller.go:1012] Missing timestamp for Node bridge-20220221084933-6550. Assuming now as a timestamp. I0221 09:04:28.669778 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0221 09:04:28.669879 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I0221 09:04:28.670174 1 event.go:294] "Event occurred" object="bridge-20220221084933-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node bridge-20220221084933-6550 event: Registered Node bridge-20220221084933-6550 in Controller" I0221 09:04:28.670716 1 shared_informer.go:247] Caches are synced for ephemeral I0221 09:04:28.673581 1 shared_informer.go:247] Caches are synced for daemon sets I0221 09:04:28.715587 1 shared_informer.go:247] Caches are synced for disruption I0221 09:04:28.715612 1 disruption.go:371] Sending events to api server. I0221 09:04:28.715716 1 shared_informer.go:247] Caches are synced for stateful set I0221 09:04:28.718021 1 shared_informer.go:247] Caches are synced for namespace I0221 09:04:28.720447 1 shared_informer.go:247] Caches are synced for service account I0221 09:04:28.773772 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:28.781129 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:29.198204 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:29.214958 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:29.214983 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0221 09:04:29.275759 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:04:29.379313 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pzvfl" I0221 09:04:29.578194 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-tl8l4" I0221 09:04:29.581729 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-7jshp" I0221 09:04:29.722333 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:04:29.727479 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-tl8l4" I0221 09:08:42.096931 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:08:42.103134 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-f2pzb" * * ==> kube-proxy [cd31aa9c0c74] <== * I0221 09:04:30.030088 1 node.go:163] Successfully retrieved node IP: 192.168.67.2 I0221 09:04:30.030160 1 server_others.go:138] "Detected node IP" address="192.168.67.2" I0221 09:04:30.030199 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:04:30.053878 1 server_others.go:206] "Using iptables Proxier" I0221 09:04:30.053920 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:04:30.053930 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:04:30.053961 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:04:30.054398 1 server.go:656] "Version info" version="v1.23.4" I0221 09:04:30.055053 1 config.go:317] "Starting service config controller" I0221 09:04:30.055088 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:04:30.102436 1 config.go:226] "Starting endpoint slice config controller" I0221 09:04:30.102491 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:04:30.155932 1 shared_informer.go:247] Caches are synced for service config I0221 09:04:30.203218 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [6e69145b30ad] <== * W0221 09:04:13.645199 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 09:04:13.645279 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" 
cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:04:13.645299 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:04:13.645306 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 09:04:13.645541 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0221 09:04:13.645590 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 09:04:13.645643 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 09:04:13.645673 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:04:13.646196 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 09:04:13.646281 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 09:04:14.478595 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:04:14.478636 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 09:04:14.628486 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 09:04:14.628528 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 09:04:14.685893 1 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 09:04:14.685922 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:04:14.756800 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 09:04:14.756941 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0221 09:04:14.766918 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 09:04:14.766966 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 09:04:14.776948 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 09:04:14.777101 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0221 09:04:14.781410 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0221 09:04:14.781648 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope I0221 09:04:16.729661 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 09:04:01 UTC, end at Mon 2022-02-21 09:13:35 UTC. 
-- Feb 21 09:10:55 bridge-20220221084933-6550 kubelet[1944]: I0221 09:10:55.409689 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:10:55 bridge-20220221084933-6550 kubelet[1944]: E0221 09:10:55.409969 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:07 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:07.409264 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:07 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:07.409562 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:20 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:20.410060 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:20 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:20.410285 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:31 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:31.410110 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:31 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:31.410324 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:43 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:43.409785 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:43 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:43.410086 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:56 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:56.409250 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:56 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:56.409476 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:08 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:08.409182 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:08 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:08.409469 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:23 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:23.409337 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:23 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:23.409560 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:37 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:37.409441 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:37 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:37.409740 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:51 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:51.410017 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:51 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:51.410235 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:13:04 bridge-20220221084933-6550 kubelet[1944]: I0221 09:13:04.409239 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:13:04 bridge-20220221084933-6550 kubelet[1944]: E0221 09:13:04.409475 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:13:18 bridge-20220221084933-6550 kubelet[1944]: 
I0221 09:13:18.412224 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:13:18 bridge-20220221084933-6550 kubelet[1944]: E0221 09:13:18.412491 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:13:31 bridge-20220221084933-6550 kubelet[1944]: I0221 09:13:31.409896 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" * * ==> storage-provisioner [293c64d3f2e2] <== * I0221 09:10:08.540866 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0221 09:10:38.544484 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout * * ==> storage-provisioner [e990cc7800b7] <== * I0221 09:13:31.532505 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p bridge-20220221084933-6550 -n bridge-20220221084933-6550 E0221 09:13:36.308558 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kubenet/DNS net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/bridge helpers_test.go:262: (dbg) Run: kubectl --context bridge-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/bridge]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context bridge-20220221084933-6550 describe pod helpers_test.go:276: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 describe pod : exit status 1 (43.408452ms) ** stderr ** error: resource name may not be empty ** /stderr ** helpers_test.go:278: kubectl --context bridge-20220221084933-6550 describe pod : exit status 1 helpers_test.go:176: Cleaning up "bridge-20220221084933-6550" profile ... 
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p bridge-20220221084933-6550 E0221 09:13:37.589308 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p bridge-20220221084933-6550: (2.64548242s) === CONT TestStartStop/group/no-preload === RUN TestStartStop/group/no-preload/serial === RUN TestStartStop/group/no-preload/serial/FirstStart start_stop_delete_test.go:171: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0 E0221 09:13:40.149613 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:42.084608 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.089864 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.100177 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.120449 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.161253 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.241533 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.404962 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:42.725440 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:43.366371 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory 
E0221 09:13:44.646534 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:13:45.270147 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:47.207107 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kubenet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14308808s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:13:52.327955 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:13:55.511603 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:14:02.568935 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:14:05.984218 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147684786s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:14:15.992056 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:14:23.049680 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 
09:14:29.911677 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kubenet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177565411s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:14:33.149259 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory === CONT TestStartStop/group/no-preload/serial/FirstStart start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (54.576127685s) === RUN TestStartStop/group/no-preload/serial/DeployApp start_stop_delete_test.go:181: (dbg) Run: kubectl --context no-preload-20220221091339-6550 create -f testdata/busybox.yaml start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ... helpers_test.go:343: "busybox" [fb79d056-c563-4f84-955d-90a4971e5379] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox]) helpers_test.go:343: "busybox" [fb79d056-c563-4f84-955d-90a4971e5379] Running === CONT TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.278098961s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* === CONT TestNetworkPlugins/group/enable-default-cni net_test.go:198: "enable-default-cni" test finished in 25m4.867153999s, failed=true net_test.go:199: *** TestNetworkPlugins/group/enable-default-cni FAILED at 2022-02-21 09:14:38.628715991 +0000 UTC m=+2971.391035583 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/enable-default-cni]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect enable-default-cni-20220221084933-6550 helpers_test.go:236: (dbg) docker inspect enable-default-cni-20220221084933-6550: -- stdout -- [ { "Id": "5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7", "Created": "2022-02-21T09:03:39.327100743Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 444720, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T09:03:39.776155311Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": 
"sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/resolv.conf", "HostnamePath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/hostname", "HostsPath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/hosts", "LogPath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7-json.log", "Name": "/enable-default-cni-20220221084933-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "enable-default-cni-20220221084933-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "enable-default-cni-20220221084933-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344
cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c/merged", "UpperDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c/diff", "WorkDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "enable-default-cni-20220221084933-6550", "Source": "/var/lib/docker/volumes/enable-default-cni-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "enable-default-cni-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "enable-default-cni-20220221084933-6550", "name.minikube.sigs.k8s.io": "enable-default-cni-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "62113ff6c1601877b00e3b9a107b91c292f3345ac201f7c3f1e01039af08dc28", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49389" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49388" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49385" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49387" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49386" } ] }, "SandboxKey": "/var/run/docker/netns/62113ff6c160", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "enable-default-cni-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.58.2" }, "Links": null, "Aliases": [ "5870309f6f92", "enable-default-cni-20220221084933-6550" ], "NetworkID": 
"3436ceea501355dda724417d7ee94ad045ea978227c60239b598f71c466f16a5", "EndpointID": "b305f3dcbfb8ac283a12706e619e07887bffe5d726304e09b99b47f88c19e0ea", "Gateway": "192.168.58.1", "IPAddress": "192.168.58.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:3a:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p enable-default-cni-20220221084933-6550 -n enable-default-cni-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/enable-default-cni FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/enable-default-cni]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p enable-default-cni-20220221084933-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p enable-default-cni-20220221084933-6550 logs -n 25: (1.202371297s) helpers_test.go:253: TestNetworkPlugins/group/enable-default-cni logs: -- stdout -- * * ==> Audit <== * |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | 
Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | | -p | kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:44 UTC | Mon, 21 Feb 2022 09:09:45 UTC | | | logs -n 25 | | | | | | | delete | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:45 UTC | Mon, 21 Feb 2022 09:09:48 UTC | | start | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:48 UTC | Mon, 21 Feb 2022 09:11:57 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --kvm-network=default | | | | | | | | --kvm-qemu-uri=qemu:///system | | | | | | | | --disable-driver-mounts | | | | | | | | --keep-context=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.16.0 | | | | | | | addons | enable metrics-server -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:06 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | | | | --registries=MetricsServer=fake.domain | | | | | | | start | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:25 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --network-plugin=kubenet | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:15 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | pgrep -a kubelet | | | | | | | stop | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:17 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | | | -p | bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:35 UTC | Mon, 21 Feb 2022 09:13:36 UTC | | | logs -n 25 | | | | | | | delete | -p bridge-20220221084933-6550 | 
bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:36 UTC | Mon, 21 Feb 2022 09:13:39 UTC | | start | -p no-preload-20220221091339-6550 | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:39 UTC | Mon, 21 Feb 2022 09:14:33 UTC | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --preload=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:13:39 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:13:39.344379 488103 out.go:297] Setting OutFile to fd 1 ... I0221 09:13:39.344478 488103 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:13:39.344488 488103 out.go:310] Setting ErrFile to fd 2... I0221 09:13:39.344492 488103 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:13:39.344595 488103 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:13:39.344892 488103 out.go:304] Setting JSON to false I0221 09:13:39.346789 488103 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3374,"bootTime":1645431446,"procs":677,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:13:39.346878 488103 start.go:122] virtualization: kvm guest I0221 09:13:39.349398 488103 out.go:176] * [no-preload-20220221091339-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:13:39.350741 488103 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:13:39.349556 488103 notify.go:193] Checking for updates... 
I0221 09:13:39.352383 488103 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:13:39.353788 488103 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:13:39.355117 488103 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:13:39.356401 488103 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:13:39.356948 488103 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:13:39.357057 488103 config.go:176] Loaded profile config "kubenet-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:13:39.357179 488103 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0 I0221 09:13:39.357233 488103 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:13:39.403528 488103 docker.go:132] docker version: linux-20.10.12 I0221 09:13:39.403617 488103 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:13:39.496690 488103 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:13:39.434778262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:13:39.496825 488103 docker.go:237] overlay module found I0221 09:13:39.499606 488103 out.go:176] * Using the docker driver based on user configuration I0221 09:13:39.499632 488103 start.go:281] selected driver: docker I0221 09:13:39.499637 488103 start.go:798] validating driver "docker" against I0221 09:13:39.499657 488103 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:13:39.499712 488103 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:13:39.499733 488103 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:13:39.501118 488103 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:13:39.501718 488103 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:13:39.594032 488103 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:13:39.532202908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:13:39.594178 488103 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:13:39.594318 488103 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:13:39.594342 488103 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:13:39.594358 488103 cni.go:93] Creating CNI manager for "" I0221 09:13:39.594366 488103 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:13:39.594374 488103 start_flags.go:302] config: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:13:39.596655 488103 out.go:176] * Starting control plane node no-preload-20220221091339-6550 in cluster no-preload-20220221091339-6550 I0221 09:13:39.596693 488103 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:13:39.598166 488103 out.go:176] * Pulling base image ... 
I0221 09:13:39.598214 488103 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:13:39.598325 488103 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:13:39.598355 488103 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:13:39.598395 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json: {Name:mka1935bea8c99f28dd349264d0742b49f686366 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:13:39.598520 488103 cache.go:107] acquiring lock: {Name:mkae39637d54454769ea96c0928557495a2624a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598520 488103 cache.go:107] acquiring lock: {Name:mkf4838fe0f0754a09f1960b33e83e9fd73716a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598567 488103 cache.go:107] acquiring lock: {Name:mkc848fd9c1e80ffd1414dd8603c19c641b3fcb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598659 488103 cache.go:107] acquiring lock: {Name:mk048af2cde148e8a512f7653817cea4bb1a47e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598647 488103 cache.go:107] acquiring lock: {Name:mkd0cd2ae3afc8e39e716bbcd5f1e196bdbc0e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598667 488103 cache.go:107] acquiring lock: {Name:mk8eae83c87e69d4f61d57feebab23b9c618f6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598666 488103 cache.go:107] acquiring lock: {Name:mk4db3a52d1f4fba9dc9223f3164cb8742f00f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598675 488103 cache.go:107] acquiring lock: {Name:mk8cb7540d8a1bd7faccdcc974630f93843749a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598703 488103 cache.go:107] acquiring lock: {Name:mk9f52e4209628388c7268565716f70b6a94e740 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598735 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists I0221 09:13:39.598760 488103 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 117.095µs I0221 09:13:39.598776 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists I0221 09:13:39.598778 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists I0221 09:13:39.598776 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 exists I0221 09:13:39.598777 488103 cache.go:107] acquiring lock: 
{Name:mk0340c3f1bf4216c7deeea4078501a3da4b3533 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598797 488103 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 288.386µs I0221 09:13:39.598781 488103 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded I0221 09:13:39.598814 488103 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded I0221 09:13:39.598735 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 exists I0221 09:13:39.598822 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists I0221 09:13:39.598825 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 exists I0221 09:13:39.598829 488103 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" took 174.192µs I0221 09:13:39.598825 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists I0221 09:13:39.598842 488103 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 succeeded I0221 09:13:39.598838 488103 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 222.709µs I0221 09:13:39.598855 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 exists I0221 09:13:39.598856 488103 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 291.323µs I0221 09:13:39.598864 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists I0221 09:13:39.598868 488103 cache.go:80] save 
to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded I0221 09:13:39.598874 488103 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" took 100.458µs I0221 09:13:39.598891 488103 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 succeeded I0221 09:13:39.598796 488103 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 131.724µs I0221 09:13:39.598901 488103 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded I0221 09:13:39.598877 488103 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 345.743µs I0221 09:13:39.598908 488103 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded I0221 09:13:39.598857 488103 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded I0221 09:13:39.598801 488103 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" took 100.527µs I0221 09:13:39.598922 488103 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 succeeded I0221 09:13:39.598841 488103 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" took 337.448µs I0221 09:13:39.598944 488103 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 succeeded I0221 09:13:39.598955 488103 cache.go:87] Successfully saved all images to host disk. 
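Everything above is a cache hit: each image takes its own lock, finds the cached tarball already on disk, and the "save to tar file" step is a no-op measured in microseconds. A minimal sketch of that check-under-lock pattern (hypothetical helper, not minikube's actual code; the real locks are cross-process file locks carrying the Delay/Timeout values shown in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// per-image locks, standing in for the mk... named locks in the log
var cacheLocks sync.Map

// ensureCached is a no-op when the image tarball already exists, which is
// why every "cache image ... took" line above reports only microseconds.
func ensureCached(cacheDir, image string) error {
	mu, _ := cacheLocks.LoadOrStore(image, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	// e.g. docker.io/kubernetesui/dashboard:v2.3.1 -> .../dashboard_v2.3.1
	tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(tar); err == nil {
		fmt.Printf("cache image %q took %s (exists)\n", image, time.Since(start))
		return nil
	}
	return fmt.Errorf("cache miss for %s: download not implemented in this sketch", image)
}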
I0221 09:13:39.644932 488103 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:13:39.644975 488103 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:13:39.644992 488103 cache.go:208] Successfully downloaded all kic artifacts I0221 09:13:39.645040 488103 start.go:313] acquiring machines lock for no-preload-20220221091339-6550: {Name:mk3240de6571e839de8f8161d174b6e05c7d8988 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.645186 488103 start.go:317] acquired machines lock for "no-preload-20220221091339-6550" in 121.461µs I0221 09:13:39.645211 488103 start.go:89] Provisioning new machine with config: &{Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:13:39.645300 488103 start.go:126] createHost starting for "" (driver="docker") I0221 09:13:38.369177 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:40.867726 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:39.647694 488103 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ... 
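Host creation is serialized by the "machines lock" acquired above, parameterized with a 500ms retry delay and a 10-minute timeout. A simplified in-process sketch of that delay/timeout acquire loop (minikube's real lock is a cross-process file lock; this illustration uses a plain mutex):

package main

import (
	"fmt"
	"sync"
	"time"
)

// acquire polls a try-lock every delay until timeout, matching the
// {Delay:500ms Timeout:10m0s} parameters logged for the machines lock.
// Requires Go 1.18+ for sync.Mutex.TryLock.
func acquire(mu *sync.Mutex, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if mu.TryLock() {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("failed to acquire lock within %s", timeout)
		}
		time.Sleep(delay)
	}
}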
I0221 09:13:39.647941 488103 start.go:160] libmachine.API.Create for "no-preload-20220221091339-6550" (driver="docker") I0221 09:13:39.647977 488103 client.go:168] LocalClient.Create starting I0221 09:13:39.648053 488103 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:13:39.648090 488103 main.go:130] libmachine: Decoding PEM data... I0221 09:13:39.648111 488103 main.go:130] libmachine: Parsing certificate... I0221 09:13:39.648190 488103 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:13:39.648233 488103 main.go:130] libmachine: Decoding PEM data... I0221 09:13:39.648252 488103 main.go:130] libmachine: Parsing certificate... I0221 09:13:39.648667 488103 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:13:39.682574 488103 cli_runner.go:180] docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:13:39.682642 488103 network_create.go:254] running [docker network inspect no-preload-20220221091339-6550] to gather additional debugging logs... 
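The network probe above works by exit code: `docker network inspect` is run with a Go template, and a nonzero exit (the W-line) simply means the network has not been created yet, which is the expected first-run outcome. A hedged sketch of that probe (hypothetical helper around the docker CLI):

package main

import "os/exec"

// networkExists probes a docker network via a template inspect; a nonzero
// exit ("No such network") means it has not been created yet.
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name,
		"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // exit status 1, as in the W-line above
		}
		return false, err // the docker CLI itself could not run
	}
	return true, nil
}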
I0221 09:13:39.682665 488103 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550
W0221 09:13:39.718056 488103 cli_runner.go:180] docker network inspect no-preload-20220221091339-6550 returned with exit code 1
I0221 09:13:39.718088 488103 network_create.go:257] error running [docker network inspect no-preload-20220221091339-6550]: docker network inspect no-preload-20220221091339-6550: exit status 1
stdout:
[]

stderr:
Error: No such network: no-preload-20220221091339-6550
I0221 09:13:39.718118 488103 network_create.go:259] output of [docker network inspect no-preload-20220221091339-6550]:
-- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: no-preload-20220221091339-6550

** /stderr **
I0221 09:13:39.718181 488103 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:13:39.753279 488103 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-702b27ce9c6c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:47:23:7f}}
I0221 09:13:39.754138 488103 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3436ceea5013 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ca:78:ad:42}}
I0221 09:13:39.755228 488103 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000114198] misses:0}
I0221 09:13:39.755270 488103 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0221 09:13:39.755296 488103 network_create.go:106] attempt to create docker network no-preload-20220221091339-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
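The subnet scan above walks candidate private /24s and skips any already bound to a host interface (192.168.49.0/24 and 192.168.58.0/24 here), settling on 192.168.67.0/24. A simplified sketch of that scan; the 9-step increment matches the 49 -> 58 -> 67 progression in the log, but the helper itself is hypothetical:

package main

import (
	"fmt"
	"net"
)

// freeSubnet walks candidate 192.168.x.0/24 ranges and skips any that
// overlap an address already assigned on the host (the br-* bridges above).
func freeSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, ...
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		taken := false
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && candidate.Contains(ip) {
				taken = true
				break
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}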
I0221 09:13:39.755356 488103 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220221091339-6550 I0221 09:13:39.825551 488103 network_create.go:90] docker network no-preload-20220221091339-6550 192.168.67.0/24 created I0221 09:13:39.825583 488103 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20220221091339-6550" container I0221 09:13:39.825652 488103 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:13:39.861028 488103 cli_runner.go:133] Run: docker volume create no-preload-20220221091339-6550 --label name.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:13:39.896121 488103 oci.go:102] Successfully created a docker volume no-preload-20220221091339-6550 I0221 09:13:39.896221 488103 cli_runner.go:133] Run: docker run --rm --name no-preload-20220221091339-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --entrypoint /usr/bin/test -v no-preload-20220221091339-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:13:40.442915 488103 oci.go:106] Successfully prepared a docker volume no-preload-20220221091339-6550 I0221 09:13:40.442979 488103 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker W0221 09:13:40.443043 488103 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:13:40.443052 488103 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0221 09:13:40.443100 488103 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:13:40.538914 488103 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220221091339-6550 --name no-preload-20220221091339-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --network no-preload-20220221091339-6550 --ip 192.168.67.2 --volume no-preload-20220221091339-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:13:40.958501 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Running}} I0221 09:13:40.997225 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:13:41.032772 488103 cli_runner.go:133] Run: docker exec no-preload-20220221091339-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:13:41.103834 488103 oci.go:281] the created container "no-preload-20220221091339-6550" has a running status. 
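Once `docker run` returns, the new node container is checked through `docker container inspect` with `{{.State.Running}}` and `{{.State.Status}}` templates until it reports running, as the inspect calls above show. A small sketch of such a wait loop (hypothetical helper around the docker CLI):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the node container
// reports the "running" state or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}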
I0221 09:13:41.103871 488103 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa... I0221 09:13:41.230681 488103 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:13:41.322050 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:13:41.360388 488103 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:13:41.360414 488103 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220221091339-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:13:41.453502 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:13:41.497205 488103 machine.go:88] provisioning docker machine ... I0221 09:13:41.497243 488103 ubuntu.go:169] provisioning hostname "no-preload-20220221091339-6550" I0221 09:13:41.497302 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:41.537889 488103 main.go:130] libmachine: Using SSH client type: native I0221 09:13:41.538087 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 } I0221 09:13:41.538103 488103 main.go:130] libmachine: About to run SSH command: sudo hostname no-preload-20220221091339-6550 && echo "no-preload-20220221091339-6550" | sudo tee /etc/hostname I0221 09:13:41.672020 488103 main.go:130] libmachine: SSH cmd err, output: : no-preload-20220221091339-6550 I0221 09:13:41.672091 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:41.706730 488103 main.go:130] libmachine: Using SSH client type: native I0221 09:13:41.706865 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 } I0221 09:13:41.706883 488103 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sno-preload-20220221091339-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220221091339-6550/g' /etc/hosts; else echo '127.0.1.1 no-preload-20220221091339-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:13:41.830905 488103 main.go:130] libmachine: SSH cmd err, output: : I0221 09:13:41.830942 488103 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:13:41.830958 488103 ubuntu.go:177] setting up certificates I0221 09:13:41.830971 488103 provision.go:83] configureAuth start I0221 09:13:41.831055 488103 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:13:41.865655 488103 provision.go:138] copyHostCerts I0221 09:13:41.865724 488103 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:13:41.865734 488103 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:13:41.865815 488103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:13:41.865907 488103 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... 
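The copyHostCerts step inside configureAuth above is deliberately idempotent: any existing ca.pem/cert.pem/key.pem in the machine store is removed and re-copied from .minikube/certs, so the store always matches the canonical certificates. A sketch of that remove-then-copy pattern (hypothetical helper, not minikube's exec_runner):

package main

import (
	"io"
	"os"
)

// syncCert drops any stale copy ("found ..., removing ..." in the log) and
// re-copies the canonical cert so the store matches .minikube/certs.
func syncCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}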
I0221 09:13:41.865933 488103 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:13:41.865964 488103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:13:41.866043 488103 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:13:41.866057 488103 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:13:41.866086 488103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:13:41.866155 488103 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220221091339-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220221091339-6550] I0221 09:13:42.128981 488103 provision.go:172] copyRemoteCerts I0221 09:13:42.129042 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:13:42.129079 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:42.164031 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:42.250851 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:13:42.269267 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes) I0221 09:13:42.288702 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 09:13:42.307301 488103 provision.go:86] duration metric: configureAuth took 476.316023ms I0221 09:13:42.307335 488103 ubuntu.go:193] setting minikube options for container-runtime I0221 09:13:42.307536 488103 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, 
KubernetesVersion=v1.23.5-rc.0
I0221 09:13:42.307596 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:42.343570 488103 main.go:130] libmachine: Using SSH client type: native
I0221 09:13:42.343712 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 }
I0221 09:13:42.343726 488103 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 09:13:42.463140 488103 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 09:13:42.463164 488103 ubuntu.go:71] root file system type: overlay
I0221 09:13:42.463293 488103 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 09:13:42.463344 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:42.497372 488103 main.go:130] libmachine: Using SSH client type: native
I0221 09:13:42.497513 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 }
I0221 09:13:42.497574 488103 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 09:13:42.627970 488103 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0221 09:13:42.628056 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:42.664000 488103 main.go:130] libmachine: Using SSH client type: native
I0221 09:13:42.664164 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 }
I0221 09:13:42.664184 488103 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:13:43.325731 488103 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-02-21 09:13:42.619122114 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 09:13:43.325769 488103 machine.go:91] provisioned docker machine in 1.828543141s
I0221 09:13:43.325779 488103 client.go:171] LocalClient.Create took 3.677794054s
I0221 09:13:43.325796 488103 start.go:168] duration metric: libmachine.API.Create for "no-preload-20220221091339-6550" took 3.677856275s
I0221 09:13:43.325810 488103 start.go:267] post-start starting for "no-preload-20220221091339-6550" (driver="docker")
I0221 09:13:43.325821 488103 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:13:43.325879 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:13:43.325916 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:43.361077 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:13:43.450978 488103 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:13:43.453753 488103 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:13:43.453776 488103 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:13:43.453783 488103 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:13:43.453788 488103 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:13:43.453797 488103 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
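The unit update above is effectively a compare-and-swap: the new docker.service is written to docker.service.new, diffed against the live unit, and only swapped in (followed by daemon-reload, enable, restart) when the two differ, so an unchanged configuration never restarts Docker. A sketch wrapping the exact shell sequence from the log (the Go wrapper itself is hypothetical; in minikube this command runs over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// updateDockerUnit reproduces the shell sequence from the log: diff -u
// exits 0 when old and new units are identical, so the || group (swap,
// daemon-reload, enable, restart) only runs when the unit actually changed.
func updateDockerUnit() error {
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out)) // the unified diff, when there is one
	return err
}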
I0221 09:13:43.453844 488103 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:13:43.453909 488103 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:13:43.453979 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:13:43.460659 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:13:43.478443 488103 start.go:270] post-start completed in 152.616099ms I0221 09:13:43.478780 488103 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:13:43.513485 488103 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:13:43.513709 488103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:13:43.513749 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:43.547929 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:43.631425 488103 start.go:129] duration metric: createHost completed in 3.986113499s I0221 09:13:43.631458 488103 start.go:80] releasing machines lock for "no-preload-20220221091339-6550", held for 3.986260089s I0221 09:13:43.631557 488103 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:13:43.666430 488103 ssh_runner.go:195] Run: systemctl --version I0221 09:13:43.666485 488103 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:13:43.666549 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:43.666486 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:43.704263 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:43.704451 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:43.932517 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:13:43.941933 488103 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 
09:13:43.951210 488103 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:13:43.951273 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:13:43.960457 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:13:43.973513 488103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:13:44.055313 488103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:13:44.132862 488103 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:13:44.143117 488103 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:13:44.218273 488103 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:13:44.228554 488103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:13:44.272600 488103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:13:44.315467 488103 out.go:203] * Preparing Kubernetes v1.23.5-rc.0 on Docker 20.10.12 ... I0221 09:13:44.315529 488103 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:13:44.348846 488103 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts I0221 09:13:44.352219 488103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:13:43.366865 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:45.367971 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:47.867419 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:44.363594 488103 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:13:44.363685 488103 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:13:44.363734 488103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:13:44.397396 488103 docker.go:606] Got preloaded images: I0221 09:13:44.397418 488103 docker.go:612] k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 wasn't preloaded I0221 09:13:44.397423 488103 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 k8s.gcr.io/kube-proxy:v1.23.5-rc.0 k8s.gcr.io/pause:3.6 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7] I0221 09:13:44.398800 488103 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:44.398810 488103 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:44.398799 488103 image.go:134] retrieving image: 
k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:44.398877 488103 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:44.399066 488103 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:44.399227 488103 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:44.399557 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.6 I0221 09:13:44.399572 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:13:44.399557 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.5.1-0 I0221 09:13:44.399605 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.6 I0221 09:13:44.399686 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399691 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399699 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399696 488103 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist I0221 09:13:44.399977 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399998 488103 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist I0221 09:13:44.443356 488103 cache_images.go:116] "k8s.gcr.io/etcd:3.5.1-0" needs transfer: "k8s.gcr.io/etcd:3.5.1-0" does not exist at hash "sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d" in container runtime I0221 09:13:44.443404 488103 docker.go:287] Removing image: k8s.gcr.io/etcd:3.5.1-0 I0221 09:13:44.443444 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.5.1-0 I0221 09:13:44.443790 488103 cache_images.go:116] "k8s.gcr.io/pause:3.6" needs transfer: "k8s.gcr.io/pause:3.6" does not exist at hash "sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee" in container runtime I0221 09:13:44.443801 488103 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime I0221 09:13:44.443832 488103 docker.go:287] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6 I0221 09:13:44.443842 488103 docker.go:287] Removing image: k8s.gcr.io/pause:3.6 I0221 09:13:44.443863 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns/coredns:v1.8.6 I0221 09:13:44.443883 488103 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime I0221 09:13:44.443894 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.6 I0221 09:13:44.443910 488103 docker.go:287] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:13:44.443945 488103 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:13:44.536938 488103 cache_images.go:286] Loading 
image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 I0221 09:13:44.537026 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.1-0 I0221 09:13:44.544457 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 I0221 09:13:44.544517 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 I0221 09:13:44.544550 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5 I0221 09:13:44.544567 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.6 I0221 09:13:44.544469 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 I0221 09:13:44.544632 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6 I0221 09:13:44.544650 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.1-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.1-0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/etcd_3.5.1-0': No such file or directory I0221 09:13:44.544663 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 --> /var/lib/minikube/images/etcd_3.5.1-0 (112381440 bytes) I0221 09:13:44.604192 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.6: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/pause_3.6': No such file or directory I0221 09:13:44.604233 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 --> /var/lib/minikube/images/pause_3.6 (325632 bytes) I0221 09:13:44.604279 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory I0221 09:13:44.604315 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (15603712 bytes) I0221 09:13:44.604329 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory I0221 
09:13:44.604357 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes) I0221 09:13:44.649059 488103 docker.go:254] Loading image: /var/lib/minikube/images/pause_3.6 I0221 09:13:44.649086 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.6 | docker load" I0221 09:13:44.947516 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 from cache I0221 09:13:44.947561 488103 docker.go:254] Loading image: /var/lib/minikube/images/storage-provisioner_v5 I0221 09:13:44.947576 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load" I0221 09:13:45.466186 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache I0221 09:13:45.466230 488103 docker.go:254] Loading image: /var/lib/minikube/images/coredns_v1.8.6 I0221 09:13:45.466255 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load" I0221 09:13:45.923313 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:46.043249 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:46.044357 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:46.159542 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache I0221 09:13:46.159590 488103 docker.go:254] Loading image: /var/lib/minikube/images/etcd_3.5.1-0 I0221 09:13:46.159612 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.1-0 | docker load" I0221 09:13:46.159665 488103 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" does not exist at hash "21a6abb196d761b99a1c0080082127daf45c7ea5429bb08972caeefea3131e87" in container runtime I0221 09:13:46.159709 488103 docker.go:287] Removing image: k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:46.159742 488103 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" does not exist at hash "771d3886391c929e2b3b1722f9e55ef67fa8f48c043395cfca70c5ce56ae0394" in container runtime I0221 09:13:46.159773 488103 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" does not exist at hash "636768fbf314dcc4d0872d883b2a329d6de08f4742c73243a3552583533b2624" in container runtime I0221 09:13:46.159795 488103 docker.go:287] Removing image: k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:46.159799 488103 docker.go:287] Removing image: k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:46.159832 488103 ssh_runner.go:195] Run: 
docker rmi k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:46.159833 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:46.159747 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:46.240008 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:46.425062 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:46.430339 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:49.867661 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:52.367194 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:49.947328 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.1-0 | docker load": (3.787696555s) I0221 09:13:49.947358 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 from cache I0221 09:13:49.947432 488103 ssh_runner.go:235] Completed: docker rmi k8s.gcr.io/kube-scheduler:v1.23.5-rc.0: (3.787585932s) I0221 09:13:49.947470 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 I0221 09:13:49.947482 488103 ssh_runner.go:235] Completed: docker rmi k8s.gcr.io/kube-apiserver:v1.23.5-rc.0: (3.787519775s) I0221 09:13:49.947533 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 I0221 09:13:49.947538 488103 ssh_runner.go:235] Completed: docker rmi k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0: (3.787605164s) I0221 09:13:49.947560 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 I0221 09:13:49.947585 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 I0221 09:13:49.947595 488103 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.23.5-rc.0: (3.707559534s) I0221 09:13:49.947609 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 I0221 09:13:49.947633 488103 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" does not exist at hash "0c96fa04944904630c8121480edb68b27f40bb389158c4a70db6ef21acf559a2" in container runtime I0221 09:13:49.947665 488103 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.3.1: (3.522572449s) I0221 09:13:49.947697 488103 docker.go:287] Removing image: k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:49.947702 488103 cache_images.go:116] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: 
"docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570" in container runtime I0221 09:13:49.947706 488103 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.7: (3.517339721s) I0221 09:13:49.947636 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 I0221 09:13:49.947730 488103 docker.go:287] Removing image: docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:49.947738 488103 cache_images.go:116] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9" in container runtime I0221 09:13:49.947762 488103 docker.go:287] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:49.947786 488103 ssh_runner.go:195] Run: docker rmi docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:49.947762 488103 ssh_runner.go:195] Run: docker rmi docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:49.947734 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:49.952449 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0': No such file or directory I0221 09:13:49.952490 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 (15133184 bytes) I0221 09:13:50.022902 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 I0221 09:13:50.023046 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 I0221 09:13:50.023364 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 I0221 09:13:50.023457 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7 I0221 09:13:50.023768 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0': No such file or directory I0221 09:13:50.023793 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 (30170624 bytes) I0221 09:13:50.023818 488103 cache_images.go:286] Loading image from: 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 I0221 09:13:50.023897 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1 I0221 09:13:50.023898 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0': No such file or directory I0221 09:13:50.023944 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 (32601088 bytes) I0221 09:13:50.032311 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.23.5-rc.0': No such file or directory I0221 09:13:50.032343 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 (39278080 bytes) I0221 09:13:50.036680 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory I0221 09:13:50.036724 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (15031296 bytes) I0221 09:13:50.037237 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory I0221 09:13:50.037262 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (66936320 bytes) I0221 09:13:50.102459 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 I0221 09:13:50.102496 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 | docker load" I0221 09:13:51.428506 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 | docker load": (1.325995772s) I0221 09:13:51.428534 488103 cache_images.go:315] Transferred and loaded 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 from cache I0221 09:13:51.428574 488103 docker.go:254] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7 I0221 09:13:51.428607 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.7 | docker load" I0221 09:13:51.945013 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache I0221 09:13:51.945061 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 I0221 09:13:51.945077 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 | docker load" I0221 09:13:53.211163 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 | docker load": (1.266069019s) I0221 09:13:53.211193 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 from cache I0221 09:13:53.211220 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 I0221 09:13:53.211239 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 | docker load" I0221 09:13:54.367772 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:56.867139 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:54.537545 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 | docker load": (1.326284586s) I0221 09:13:54.537574 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 from cache I0221 09:13:54.537609 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 I0221 09:13:54.537650 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 | docker load" I0221 09:13:56.339616 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 | docker load": (1.801945857s) I0221 09:13:56.339657 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 from cache I0221 09:13:56.339683 488103 docker.go:254] Loading image: /var/lib/minikube/images/dashboard_v2.3.1 I0221 09:13:56.339699 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load" I0221 09:13:59.419397 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load": (3.079678516s) I0221 09:13:59.419426 488103 cache_images.go:315] Transferred and loaded 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 from cache I0221 09:13:59.419452 488103 cache_images.go:123] Successfully loaded all cached images I0221 09:13:59.419461 488103 cache_images.go:92] LoadImages completed in 15.022026603s I0221 09:13:59.419520 488103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:13:59.514884 488103 cni.go:93] Creating CNI manager for "" I0221 09:13:59.514919 488103 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:13:59.514931 488103 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:13:59.514946 488103 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220221091339-6550 NodeName:no-preload-20220221091339-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:13:59.515113 488103 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "no-preload-20220221091339-6550" kubeletExtraArgs: node-ip: 192.168.67.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.67.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.5-rc.0 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: 
/etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:13:59.515204 488103 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220221091339-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 [Install] config: {KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:13:59.515266 488103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5-rc.0 I0221 09:13:59.523194 488103 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.23.5-rc.0: Process exited with status 2 stdout: stderr: ls: cannot access '/var/lib/minikube/binaries/v1.23.5-rc.0': No such file or directory Initiating transfer... I0221 09:13:59.523273 488103 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.23.5-rc.0 I0221 09:13:59.530874 488103 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl.sha256 I0221 09:13:59.530934 488103 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm.sha256 I0221 09:13:59.530958 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl I0221 09:13:59.530962 488103 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet.sha256 I0221 09:13:59.531041 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:13:59.531044 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm I0221 09:13:59.535091 488103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm': No such file or directory I0221 09:13:59.535122 488103 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubeadm --> /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm (45211648 bytes) I0221 09:13:59.535208 488103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.23.5-rc.0/kubectl': No such file or directory I0221 09:13:59.535224 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubectl --> /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl (46592000 bytes) I0221 09:13:59.544514 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet I0221 09:13:59.569193 488103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.23.5-rc.0/kubelet': No such file or directory I0221 09:13:59.569244 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubelet --> /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet (124521440 bytes) I0221 09:13:59.918061 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:13:59.925503 488103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes) I0221 09:13:59.938813 488103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes) I0221 09:13:59.952402 488103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes) I0221 09:13:59.965750 488103 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0221 09:13:59.969023 488103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:13:59.978856 488103 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550 for IP: 192.168.67.2 I0221 09:13:59.978948 488103 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:13:59.978986 488103 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:13:59.979078 488103 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key I0221 09:13:59.979093 488103 crypto.go:68] Generating cert 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt with IP's: [] I0221 09:14:00.260104 488103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt ... I0221 09:14:00.260139 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: {Name:mkb5c776f53657ebf89941d4ae75e7cd4fd1ecf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.260337 488103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key ... I0221 09:14:00.260352 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key: {Name:mk807ccea67a72008f91e196b40cec5e28bc0ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.260440 488103 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e I0221 09:14:00.260459 488103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:14:00.450652 488103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e ... I0221 09:14:00.450683 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e: {Name:mkc5ca2d1641ff622ad9bb5e15df0cf696413945 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.450852 488103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e ... 
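The client, apiserver, and proxy-client certificates above come out of minikube's crypto.go. As a rough illustration of the same crypto/x509 flow, here is a minimal, self-contained Go sketch that generates a key pair and a certificate carrying the IP SANs requested in the log (192.168.67.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). It is hypothetical and self-signed for brevity; the real apiserver cert is signed by the minikubeCA key whose regeneration the log says it skipped.

// Hypothetical sketch: generate a key and a certificate with the IP SANs
// from the log. Self-signed for brevity; minikube signs with minikubeCA.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // RSA key for the certificate (minikube also writes this as the .key file).
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }

    tmpl := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // The IP SANs the log requests for the apiserver cert:
        // node IP, first service-CIDR IP, loopback, 10.0.0.1.
        IPAddresses: []net.IP{
            net.ParseIP("192.168.67.2"),
            net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"),
        },
    }

    // Template == parent makes the certificate self-signed.
    der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }

    // PEM-encode cert and key, mirroring the "Writing cert/key to ..." steps.
    certOut, err := os.Create("apiserver.crt")
    if err != nil {
        panic(err)
    }
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    certOut.Close()

    keyOut, err := os.Create("apiserver.key")
    if err != nil {
        panic(err)
    }
    pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    keyOut.Close()
}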
I0221 09:14:00.450865 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e: {Name:mkadc0c64031cb8715bb9eacd0c1e62e0d48b84a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.450941 488103 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt I0221 09:14:00.451020 488103 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key I0221 09:14:00.451088 488103 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key I0221 09:14:00.451105 488103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt with IP's: [] I0221 09:14:00.557304 488103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt ... I0221 09:14:00.557333 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt: {Name:mk3c5a592e554d32f2143385c9ad234b8e698ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.557524 488103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key ... 
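Each "WriteFile acquiring" entry above shows a lock taken with Delay:500ms and Timeout:1m0s before a cert or key file is written, so parallel test profiles sharing the .minikube tree do not clobber each other. Below is a simplified, hypothetical Go sketch of that acquire-write-release pattern using an O_EXCL lock file; minikube's actual lock implementation differs, and writeFileLocked is an illustrative name.

// Hypothetical sketch of the "WriteFile acquiring" pattern seen in the log:
// poll for an exclusive lock file (Delay:500ms, Timeout:1m0s), write, release.
package main

import (
    "errors"
    "fmt"
    "os"
    "time"
)

// writeFileLocked retries every delay until timeout to create lockPath
// exclusively, then writes data to path and removes the lock.
func writeFileLocked(path, lockPath string, data []byte, delay, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        // O_EXCL makes creation fail while another writer holds the lock.
        f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if err == nil {
            f.Close()
            break
        }
        if time.Now().After(deadline) {
            return errors.New("timed out acquiring " + lockPath)
        }
        time.Sleep(delay)
    }
    defer os.Remove(lockPath) // release the lock
    return os.WriteFile(path, data, 0o600)
}

func main() {
    err := writeFileLocked("client.key", "client.key.lock", []byte("..."),
        500*time.Millisecond, time.Minute)
    fmt.Println(err)
}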
I0221 09:14:00.557537 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key: {Name:mk417036b97dde6cbbab80a20c937b065beed3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.557683 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:14:00.557722 488103 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:14:00.557733 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:14:00.557757 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:14:00.557784 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:14:00.557808 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:14:00.557847 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:00.558683 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:14:00.577905 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:14:00.596684 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt --> 
/var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:14:00.615566 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:14:00.633972 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:14:00.653103 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:14:00.671588 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:14:00.690166 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:14:00.709166 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:14:00.727776 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:14:00.746130 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:14:00.764312 488103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:14:00.777756 488103 ssh_runner.go:195] Run: openssl version I0221 09:14:00.783035 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:14:00.791042 488103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:14:00.794707 488103 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:14:00.794749 488103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:14:00.800080 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:14:00.809600 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:14:00.818126 488103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:00.821632 488103 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:00.821676 488103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:00.827026 488103 ssh_runner.go:195] 
Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:14:00.835075 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:14:00.843086 488103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:14:00.846473 488103 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:14:00.846524 488103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:14:00.851694 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:14:00.859910 488103 kubeadm.go:391] StartCluster: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:00.860021 488103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:14:00.893585 488103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:14:00.901588 488103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:14:00.909236 488103 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:14:00.909302 488103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf 
/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:14:00.916816 488103 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:14:00.916858 488103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:13:59.367181 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:01.867767 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:01.429678 488103 out.go:203] - Generating certificates and keys ... I0221 09:14:03.820433 488103 out.go:203] - Booting up control plane ... I0221 09:14:04.367135 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:06.904036 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:08.974172 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:11.366696 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:13.366899 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:15.867420 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:18.361662 488103 out.go:203] - Configuring RBAC rules ... 
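The Start: entry above is the heart of cluster bring-up: kubeadm init runs with PATH pointing at the freshly copied versioned binaries and a long --ignore-preflight-errors list, with SystemVerification skipped outright because of the docker driver (kubeadm.go:221 above). A hypothetical local sketch of assembling that invocation in Go follows; minikube actually executes it over SSH inside the node container via ssh_runner, and the ignore list here is abbreviated.

// Hypothetical sketch of assembling the kubeadm init invocation; the real
// run happens over SSH inside the node container via ssh_runner.go.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    // Preflight checks that cannot hold inside a container are ignored;
    // this is a subset of the list in the log.
    ignores := []string{
        "DirAvailable--etc-kubernetes-manifests",
        "Port-10250",
        "Swap",
        "Mem",
        "SystemVerification", // skipped outright under the docker driver
        "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    }

    // Prepend the versioned binaries dir so "kubeadm" resolves to v1.23.5-rc.0.
    path := "PATH=/var/lib/minikube/binaries/v1.23.5-rc.0:" + os.Getenv("PATH")

    cmd := exec.Command("sudo", "env", path, "kubeadm", "init",
        "--config", "/var/tmp/minikube/kubeadm.yaml",
        "--ignore-preflight-errors="+strings.Join(ignores, ","))
    out, err := cmd.CombinedOutput()
    fmt.Println(string(out), err)
}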
I0221 09:14:18.813515 488103 cni.go:93] Creating CNI manager for "" I0221 09:14:18.813542 488103 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:14:18.813571 488103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:14:18.813719 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:18.813796 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=no-preload-20220221091339-6550 minikube.k8s.io/updated_at=2022_02_21T09_14_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:19.212122 488103 ops.go:34] apiserver oom_adj: -16 I0221 09:14:19.212232 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:18.368282 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:20.866820 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:22.867214 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:19.768205 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:20.267993 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:20.768304 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:21.268220 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:21.767700 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:22.268206 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:22.767962 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:23.268219 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:23.768450 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:24.268208 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:25.366985 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:27.868840 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:24.768176 488103 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:25.268252 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:25.767611 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:26.267740 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:26.768207 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:27.268241 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:27.768173 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:28.268054 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:28.768008 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:29.267723 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:29.768242 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:30.268495 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:30.326062 488103 kubeadm.go:1020] duration metric: took 11.512405074s to wait for elevateKubeSystemPrivileges. I0221 09:14:30.326096 488103 kubeadm.go:393] StartCluster complete in 29.466192667s I0221 09:14:30.326117 488103 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:30.326239 488103 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:14:30.328631 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:30.847109 488103 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220221091339-6550" rescaled to 1 I0221 09:14:30.847168 488103 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:14:30.849044 488103 out.go:176] * Verifying Kubernetes components... 
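The burst of "kubectl get sa default" commands above is elevateKubeSystemPrivileges polling roughly every 500ms until the default service account exists, which took 11.5s in this run. A hypothetical Go sketch of that poll loop (waitForDefaultSA is an illustrative name, not minikube's):

// Hypothetical sketch of the poll loop behind the repeated
// "kubectl get sa default" entries above.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
            "--kubeconfig="+kubeconfig)
        if err := cmd.Run(); err == nil {
            return nil // the default ServiceAccount exists; RBAC setup can proceed
        }
        time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
    }
    return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
    err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.5-rc.0/kubectl",
        "/var/lib/minikube/kubeconfig", 2*time.Minute)
    fmt.Println(err)
}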
I0221 09:14:30.847213 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:14:30.849093 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:14:30.847248 488103 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:14:30.849158 488103 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220221091339-6550" I0221 09:14:30.847450 488103 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5-rc.0 I0221 09:14:30.849178 488103 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220221091339-6550" W0221 09:14:30.849187 488103 addons.go:165] addon storage-provisioner should already be in state true I0221 09:14:30.849205 488103 host.go:66] Checking if "no-preload-20220221091339-6550" exists ... I0221 09:14:30.849235 488103 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220221091339-6550" I0221 09:14:30.849258 488103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220221091339-6550" I0221 09:14:30.849610 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:30.849624 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:30.893964 488103 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:14:30.894066 488103 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:14:30.894080 488103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:14:30.894125 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:30.896323 488103 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220221091339-6550" W0221 09:14:30.896344 488103 addons.go:165] addon default-storageclass should already be in state true I0221 09:14:30.896364 488103 host.go:66] Checking if "no-preload-20220221091339-6550" exists ... I0221 09:14:30.896685 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:30.935800 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:14:30.938226 488103 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220221091339-6550" to be "Ready" ... I0221 09:14:30.942045 488103 node_ready.go:49] node "no-preload-20220221091339-6550" has status "Ready":"True" I0221 09:14:30.942067 488103 node_ready.go:38] duration metric: took 3.808409ms waiting for node "no-preload-20220221091339-6550" to be "Ready" ... 
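The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the gateway IP 192.168.67.1: it splices a hosts{} block immediately before the "forward . /etc/resolv.conf" line and feeds the result back through kubectl replace. The same edit expressed as a hypothetical Go helper (injectHostRecord is an illustrative name):

// Hypothetical Go equivalent of the sed edit: splice a hosts{} block in
// front of the forward plugin so host.minikube.internal resolves in-cluster.
package main

import (
    "fmt"
    "strings"
)

func injectHostRecord(corefile, hostIP string) string {
    hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
    var b strings.Builder
    for _, line := range strings.SplitAfter(corefile, "\n") {
        // The sed command anchors on the "forward . /etc/resolv.conf" line.
        if strings.Contains(line, "forward . /etc/resolv.conf") {
            b.WriteString(hosts)
        }
        b.WriteString(line)
    }
    return b.String()
}

func main() {
    corefile := ".:53 {\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
    fmt.Print(injectHostRecord(corefile, "192.168.67.1"))
}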
I0221 09:14:30.942078 488103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:14:30.942087 488103 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:14:30.942103 488103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:14:30.942161 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:30.942716 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:30.956536 488103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:30.978427 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:31.007750 488103 pod_ready.go:92] pod "etcd-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:31.007779 488103 pod_ready.go:81] duration metric: took 51.208167ms waiting for pod "etcd-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.007794 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.016037 488103 pod_ready.go:92] pod "kube-apiserver-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:31.016073 488103 pod_ready.go:81] duration metric: took 8.267725ms waiting for pod "kube-apiserver-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.016086 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.023725 488103 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:31.023749 488103 pod_ready.go:81] duration metric: took 7.654894ms waiting for pod "kube-controller-manager-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.023763 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlrh9" in "kube-system" namespace to be "Ready" ... 
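Each pod_ready wait above boils down to checking whether a pod's Ready condition is "True". A hypothetical one-shot probe of that condition through kubectl's JSONPath output (podReady is an illustrative helper; minikube itself watches the API with a Kubernetes client rather than shelling out):

// Hypothetical one-shot version of a pod_ready check via kubectl JSONPath.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func podReady(pod, namespace string) (bool, error) {
    // Extract the status of the pod's Ready condition ("True" or "False").
    out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
        "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    if err != nil {
        return false, err
    }
    return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
    ok, err := podReady("kube-proxy-hlrh9", "kube-system")
    fmt.Println(ok, err)
}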
I0221 09:14:31.224660 488103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:14:31.325704 488103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:14:32.539312 488103 pod_ready.go:92] pod "kube-proxy-hlrh9" in "kube-system" namespace has status "Ready":"True" I0221 09:14:32.539346 488103 pod_ready.go:81] duration metric: took 1.515575512s waiting for pod "kube-proxy-hlrh9" in "kube-system" namespace to be "Ready" ... I0221 09:14:32.539355 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:32.543713 488103 pod_ready.go:92] pod "kube-scheduler-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:32.543732 488103 pod_ready.go:81] duration metric: took 4.370791ms waiting for pod "kube-scheduler-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:32.543739 488103 pod_ready.go:38] duration metric: took 1.601647944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:14:32.543759 488103 api_server.go:51] waiting for apiserver process to appear ... I0221 09:14:32.543799 488103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:14:32.908548 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.972707233s) I0221 09:14:32.908586 488103 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS I0221 09:14:32.921199 488103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.696493245s) I0221 09:14:32.940497 488103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.614757287s) I0221 09:14:32.940546 488103 api_server.go:71] duration metric: took 2.093358936s to wait for apiserver process to appear ... I0221 09:14:32.940562 488103 api_server.go:87] waiting for apiserver healthz status ... I0221 09:14:32.940606 488103 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... 
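The healthz wait that begins above is a plain HTTPS GET against https://192.168.67.2:8443/healthz expecting a 200 "ok", as the entries that follow confirm. A hypothetical standalone sketch; TLS verification is skipped here purely for illustration, whereas minikube trusts the cluster CA it just provisioned:

// Hypothetical standalone healthz probe. TLS verification is skipped only
// to keep the sketch short; do not do this outside of illustration.
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    resp, err := client.Get("https://192.168.67.2:8443/healthz")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver returns 200 "ok"
}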
I0221 09:14:30.366555 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:32.367235 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:32.942965 488103 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:14:32.943036 488103 addons.go:417] enableAddons completed in 2.095803648s I0221 09:14:32.946520 488103 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok I0221 09:14:32.947385 488103 api_server.go:140] control plane version: v1.23.5-rc.0 I0221 09:14:32.947406 488103 api_server.go:130] duration metric: took 6.806136ms to wait for apiserver health ... I0221 09:14:32.947416 488103 system_pods.go:43] waiting for kube-system pods to appear ... I0221 09:14:33.005949 488103 system_pods.go:59] 8 kube-system pods found I0221 09:14:33.006016 488103 system_pods.go:61] "coredns-64897985d-cq4vt" [e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.006034 488103 system_pods.go:61] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.006049 488103 system_pods.go:61] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.006058 488103 system_pods.go:61] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.006070 488103 system_pods.go:61] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.006075 488103 system_pods.go:61] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.006080 488103 system_pods.go:61] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.006091 488103 system_pods.go:61] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.006102 488103 system_pods.go:74] duration metric: took 58.679811ms to wait for pod list to return data ... I0221 09:14:33.006112 488103 default_sa.go:34] waiting for default service account to be created ... I0221 09:14:33.008987 488103 default_sa.go:45] found service account: "default" I0221 09:14:33.009019 488103 default_sa.go:55] duration metric: took 2.900219ms for default service account to be created ... I0221 09:14:33.009028 488103 system_pods.go:116] waiting for k8s-apps to be running ... 
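When components are still missing, the retry.go entries just below wait slightly uneven intervals (263ms, then 381ms) before re-listing the kube-system pods, which suggests a growing delay with jitter. A hypothetical sketch of such a loop (retryUntil and the exact backoff shape are illustrative assumptions, not minikube's actual retry package):

// Hypothetical retry loop with a growing, jittered delay; the exact shape
// of minikube's retry package is an assumption here.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

func retryUntil(check func() error, attempts int, base time.Duration) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = check(); err == nil {
            return nil
        }
        // Grow the delay each attempt and add up to 25% jitter, which would
        // produce uneven waits like the 263ms and 381ms seen in the log.
        d := base * time.Duration(i+1)
        d += time.Duration(rand.Int63n(int64(d / 4)))
        fmt.Printf("will retry after %v: %v\n", d, err)
        time.Sleep(d)
    }
    return err
}

func main() {
    missing := 2
    err := retryUntil(func() error {
        if missing > 0 {
            missing--
            return errors.New("missing components: kube-dns")
        }
        return nil
    }, 5, 200*time.Millisecond)
    fmt.Println(err) // <nil> once kube-dns is running
}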
I0221 09:14:33.143846 488103 system_pods.go:86] 8 kube-system pods found I0221 09:14:33.143877 488103 system_pods.go:89] "coredns-64897985d-cq4vt" [e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.143884 488103 system_pods.go:89] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.143889 488103 system_pods.go:89] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.143899 488103 system_pods.go:89] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.143906 488103 system_pods.go:89] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.143916 488103 system_pods.go:89] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.143923 488103 system_pods.go:89] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.143939 488103 system_pods.go:89] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.143977 488103 retry.go:31] will retry after 263.082536ms: missing components: kube-dns I0221 09:14:33.412618 488103 system_pods.go:86] 8 kube-system pods found I0221 09:14:33.412650 488103 system_pods.go:89] "coredns-64897985d-cq4vt" [e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.412658 488103 system_pods.go:89] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.412664 488103 system_pods.go:89] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.412670 488103 system_pods.go:89] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.412677 488103 system_pods.go:89] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.412683 488103 system_pods.go:89] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.412689 488103 system_pods.go:89] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.412702 488103 system_pods.go:89] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.412722 488103 retry.go:31] will retry after 381.329545ms: missing components: kube-dns I0221 09:14:33.799233 488103 system_pods.go:86] 8 kube-system pods found I0221 09:14:33.799261 488103 system_pods.go:89] "coredns-64897985d-cq4vt" 
[e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Running I0221 09:14:33.799271 488103 system_pods.go:89] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.799277 488103 system_pods.go:89] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.799282 488103 system_pods.go:89] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.799286 488103 system_pods.go:89] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.799290 488103 system_pods.go:89] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.799296 488103 system_pods.go:89] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.799301 488103 system_pods.go:89] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.799307 488103 system_pods.go:126] duration metric: took 790.274513ms to wait for k8s-apps to be running ... I0221 09:14:33.799318 488103 system_svc.go:44] waiting for kubelet service to be running .... I0221 09:14:33.799356 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:14:33.809667 488103 system_svc.go:56] duration metric: took 10.340697ms WaitForService to wait for kubelet. I0221 09:14:33.809696 488103 kubeadm.go:548] duration metric: took 2.962508753s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 09:14:33.809717 488103 node_conditions.go:102] verifying NodePressure condition ... I0221 09:14:33.812958 488103 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:14:33.812985 488103 node_conditions.go:123] node cpu capacity is 8 I0221 09:14:33.812996 488103 node_conditions.go:105] duration metric: took 3.275185ms to run NodePressure ... I0221 09:14:33.813006 488103 start.go:213] waiting for startup goroutines ... I0221 09:14:33.847305 488103 start.go:496] kubectl: 1.23.4, cluster: 1.23.5-rc.0 (minor skew: 0) I0221 09:14:33.849970 488103 out.go:176] * Done! kubectl is now configured to use "no-preload-20220221091339-6550" cluster and "default" namespace by default I0221 09:14:34.866648 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:37.367571 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:03:40 UTC, end at Mon 2022-02-21 09:14:39 UTC. 
-- Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303912260Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303942802Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303968007Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303982652Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.308479245Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314490990Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314522260Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314528141Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314672962Z" level=info msg="Loading containers: start." Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.397445175Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.432371517Z" level=info msg="Loading containers: done." Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.443993121Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.444056839Z" level=info msg="Daemon has completed initialization" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. 
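The dockerd line above notes that the default bridge (docker0) took 172.17.0.0/16 and that the --bip daemon option can override it. For reference, the same setting can live in /etc/docker/daemon.json; the CIDR below is an illustrative value, not one used in this run:

{
  "bip": "172.18.0.1/16"
}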
Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.465398038Z" level=info msg="API listen on [::]:2376" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.472253118Z" level=info msg="API listen on /var/run/docker.sock" Feb 21 09:04:25 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:04:25.238038406Z" level=info msg="ignoring event" container=5afd280d6ca1170ae488a5b552e3a1a019ffc651badfdefae21cf38b5344b4fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:04:25 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:04:25.358291189Z" level=info msg="ignoring event" container=dd88f9a2c29fd0d324bb0cc243731be6f6ad977286b27b60b3c81c02bc5112e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:04:46 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:04:46.865768077Z" level=info msg="ignoring event" container=72640011ea4692f842704d801b1fd6c5cdec01b158a87acedaada04ba21cbd58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:17 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:05:17.070485776Z" level=info msg="ignoring event" container=5753550452bdc181a9f3a1b4bde53fcd818a97bac42202af2c5ab08a1b8eaf9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:59 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:05:59.550940087Z" level=info msg="ignoring event" container=767d8f72b700525bc491176eb71e4e18f6edca1bc1fb1d91fdfbf8869232cde6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:06:58 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:06:58.545862700Z" level=info msg="ignoring event" container=232a60522c9e23285ef5a7fb7ded9674b1e879db7e9c46118658cf028dfc1f96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:08:12 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:12.567345086Z" level=info msg="ignoring event" container=987fc4d25f59800358d2084952b6585242449693072bbdbe977e71cecd1ad391 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:10:15 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:10:15.566326284Z" level=info msg="ignoring event" container=1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:13:32 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:13:32.546206381Z" level=info msg="ignoring event" container=7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 7e5453a7bbe4a 6e38f40d628db About a minute ago Exited storage-provisioner 6 cfeaa1bfff01b fcb59e0ee67e6 k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 6 minutes ago Running dnsutils 0 bdea8f4c61ad5 3eab59e55df1e a4ca41631cc7a 10 minutes ago Running coredns 0 b44b7a1956d4f b198c3fa15580 2114245ec4d6b 10 minutes ago Running kube-proxy 0 79ea26ef70591 6e0b11913ead7 aceacb6244f9f 10 minutes ago Running kube-scheduler 0 7482f2936a907 
22f36e8efd018 62930710c9634 10 minutes ago Running kube-apiserver 0 996ed6b04f1a3 2d52356b4d441 25f8c7f3da61c 10 minutes ago Running etcd 0 33b91e247ac96 9da67fbcae637 25444908517a5 10 minutes ago Running kube-controller-manager 0 b61f3663b8c4d * * ==> coredns [3eab59e55df1] <== * [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" * * ==> describe nodes <== * Name: enable-default-cni-20220221084933-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=enable-default-cni-20220221084933-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=enable-default-cni-20220221084933-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T09_04_01_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 09:03:54 +0000 Taints: <none> Unschedulable: false Lease: HolderIdentity: enable-default-cni-20220221084933-6550 AcquireTime: <unset> RenewTime: Mon, 21 Feb 2022 09:14:35 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:03:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:03:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:03:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:04:11 +0000 KubeletReady kubelet is posting ready
status Addresses: InternalIP: 192.168.58.2 Hostname: enable-default-cni-20220221084933-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: 88a905a7-4360-4926-9a27-46e272953df7 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default netcat-668db85669-fm848 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 6m12s kube-system coredns-64897985d-mr75l 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 10m kube-system etcd-enable-default-cni-20220221084933-6550 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 10m kube-system kube-apiserver-enable-default-cni-20220221084933-6550 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 10m kube-system kube-controller-manager-enable-default-cni-20220221084933-6550 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 10m kube-system kube-proxy-z67wt 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 10m kube-system kube-scheduler-enable-default-cni-20220221084933-6550 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 10m kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 10m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 10m kube-proxy Normal Starting 10m kubelet Starting kubelet. Normal NodeHasNoDiskPressure 10m (x4 over 10m) kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m (x4 over 10m) kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 10m (x4 over 10m) kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientMemory Normal Starting 10m kubelet Starting kubelet. 
Normal NodeHasNoDiskPressure 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeNotReady 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeNotReady Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientMemory Normal NodeReady 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.963841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.035853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023933] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [Feb21 09:14] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.035516] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.019972] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +2.943777] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027861] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.019959] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +2.951870] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.015815] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027946] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 * * ==> etcd [2d52356b4d44] <== * {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1662587402] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"343.853384ms","start":"2022-02-21T09:04:00.663Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1662587402] 'process raft request' (duration: 171.634431ms)","trace[1662587402] 'compare' (duration: 171.878651ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.663Z","time spent":"343.918406ms","remote":"127.0.0.1:34878","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":263,"response count":0,"response size":39,"request content":"compare: success:> failure: >"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1430197607] 
transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"340.353611ms","start":"2022-02-21T09:04:00.667Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1430197607] 'process raft request' (duration: 340.250258ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[2126245749] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"343.979445ms","start":"2022-02-21T09:04:00.663Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[2126245749] 'process raft request' (duration: 343.796716ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1081920735] transaction","detail":"{read_only:false; response_revision:290; number_of_response:1; }","duration":"100.157444ms","start":"2022-02-21T09:04:00.907Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1081920735] 'process raft request' (duration: 100.127415ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.663Z","time spent":"344.019188ms","remote":"127.0.0.1:34856","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":718,"response count":0,"response size":39,"request content":"compare: success:> failure:<>"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[223105609] transaction","detail":"{read_only:false; response_revision:289; number_of_response:1; }","duration":"297.76584ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[223105609] 'process raft request' (duration: 297.651356ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1844690839] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"297.835735ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1844690839] 'process raft request' (duration: 297.623192ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1556602659] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:291; }","duration":"172.477897ms","start":"2022-02-21T09:04:00.835Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1556602659] 'read index received' (duration: 171.241323ms)","trace[1556602659] 'applied index is now lower than readState.Index' (duration: 1.235517ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.667Z","time spent":"340.435747ms","remote":"127.0.0.1:34966","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3066,"response count":0,"response size":39,"request content":"compare: success:> failure:<>"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[121801330] transaction","detail":"{read_only:false; number_of_response:0; response_revision:287; }","duration":"297.95295ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[121801330] 'process raft request' (duration: 297.651332ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"etcdserver/util.go:166","msg":"apply request took too 
long","took":"344.268949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"298.608104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-enable-default-cni-20220221084933-6550\" ","response":"range_response_count:1 size:5797"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1271842616] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:290; }","duration":"344.297886ms","start":"2022-02-21T09:04:00.663Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1271842616] 'agreement among raft nodes before linearized reading' (duration: 344.224087ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[543956102] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-enable-default-cni-20220221084933-6550; range_end:; response_count:1; response_revision:290; }","duration":"298.63611ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[543956102] 'agreement among raft nodes before linearized reading' (duration: 298.534406ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.663Z","time spent":"344.328322ms","remote":"127.0.0.1:34870","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":376,"request content":"key:\"/registry/namespaces/kube-system\" "} {"level":"info","ts":"2022-02-21T09:09:56.180Z","caller":"traceutil/trace.go:171","msg":"trace[1779109173] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"235.577473ms","start":"2022-02-21T09:09:55.944Z","end":"2022-02-21T09:09:56.180Z","steps":["trace[1779109173] 'process raft request' (duration: 138.096663ms)","trace[1779109173] 'compare' (duration: 97.35885ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:13:52.358Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":635} {"level":"info","ts":"2022-02-21T09:13:52.359Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":635,"took":"674.848µs"} {"level":"warn","ts":"2022-02-21T09:14:06.231Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"182.451204ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} {"level":"info","ts":"2022-02-21T09:14:06.231Z","caller":"traceutil/trace.go:171","msg":"trace[1418751583] transaction","detail":"{read_only:false; response_revision:709; number_of_response:1; }","duration":"271.569792ms","start":"2022-02-21T09:14:05.960Z","end":"2022-02-21T09:14:06.231Z","steps":["trace[1418751583] 'process raft request' (duration: 88.954187ms)","trace[1418751583] 'compare' (duration: 182.355848ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:14:11.373Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"148.076706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" 
count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:14:11.373Z","caller":"traceutil/trace.go:171","msg":"trace[2134174816] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:709; }","duration":"148.197037ms","start":"2022-02-21T09:14:11.225Z","end":"2022-02-21T09:14:11.373Z","steps":["trace[2134174816] 'agreement among raft nodes before linearized reading' (duration: 55.251314ms)","trace[2134174816] 'count revisions from in-memory index tree' (duration: 92.813553ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:14:11.373Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.013832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:14:11.373Z","caller":"traceutil/trace.go:171","msg":"trace[402769945] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:709; }","duration":"165.219559ms","start":"2022-02-21T09:14:11.208Z","end":"2022-02-21T09:14:11.373Z","steps":["trace[402769945] 'agreement among raft nodes before linearized reading' (duration: 72.164621ms)","trace[402769945] 'count revisions from in-memory index tree' (duration: 92.824451ms)"],"step_count":2} * * ==> kernel <== * 09:14:40 up 57 min, 0 users, load average: 1.17, 1.46, 2.35 Linux enable-default-cni-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [22f36e8efd01] <== * I0221 09:03:56.068930 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 09:03:56.465277 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2] I0221 09:03:56.466317 1 controller.go:611] quota admission added evaluator for: endpoints I0221 09:03:56.564935 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 09:03:56.932173 1 controller.go:611] quota admission added evaluator for: serviceaccounts {"level":"warn","ts":"2022-02-21T09:04:00.099Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012e88c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"} {"level":"warn","ts":"2022-02-21T09:04:00.099Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00136a380/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"} E0221 09:04:00.099616 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout E0221 09:04:00.099640 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout E0221 09:04:00.099691 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 22.201µs, panicked: false, err: context canceled, panic-reason: E0221 09:04:00.099701 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0221 09:04:00.099731 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 16.253µs, panicked: 
false, err: context canceled, panic-reason: E0221 09:04:00.100928 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0221 09:04:00.102049 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout E0221 09:04:00.104227 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout E0221 09:04:00.106825 1 timeout.go:137] post-timeout activity - time-elapsed: 7.159098ms, PATCH "/api/v1/namespaces/default/events/enable-default-cni-20220221084933-6550.16d5c1b6db877374" result: E0221 09:04:00.107566 1 timeout.go:137] post-timeout activity - time-elapsed: 8.036034ms, PATCH "/api/v1/namespaces/kube-system/pods/etcd-enable-default-cni-20220221084933-6550/status" result: I0221 09:04:00.536074 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 09:04:00.666536 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 09:04:01.017550 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 09:04:01.029628 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 09:04:13.522118 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 09:04:13.628443 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 09:04:14.428054 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:08:27.303551 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.99.224.251] * * ==> kube-controller-manager [9da67fbcae63] <== * I0221 09:04:13.466436 1 shared_informer.go:247] Caches are synced for daemon sets I0221 09:04:13.466456 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0221 09:04:13.504429 1 shared_informer.go:247] Caches are synced for persistent volume I0221 09:04:13.505653 1 shared_informer.go:247] Caches are synced for endpoint_slice I0221 09:04:13.514947 1 range_allocator.go:374] Set node enable-default-cni-20220221084933-6550 PodCIDR to [10.244.0.0/24] I0221 09:04:13.527153 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z67wt" I0221 09:04:13.538371 1 shared_informer.go:247] Caches are synced for ReplicaSet I0221 09:04:13.602493 1 shared_informer.go:247] Caches are synced for attach detach I0221 09:04:13.613417 1 shared_informer.go:247] Caches are synced for disruption I0221 09:04:13.613442 1 disruption.go:371] Sending events to api server. 
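The shared_informer.go entries above and below show the standard client-go pattern behind "Waiting for caches to sync" / "Caches are synced": start shared informers, then block on cache sync before acting on the objects they serve. A minimal sketch of that pattern, not the controller-manager's own code; the kubeconfig path is a placeholder:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory with a periodic resync, serving a pod informer.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the informer cache has synced, the equivalent of the
	// "Caches are synced" lines logged by shared_informer.go.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced")
}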
I0221 09:04:13.614327 1 shared_informer.go:247] Caches are synced for deployment I0221 09:04:13.616020 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0221 09:04:13.631822 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:04:13.641662 1 shared_informer.go:247] Caches are synced for endpoint I0221 09:04:13.646936 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4pdmv" I0221 09:04:13.654908 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:13.702047 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:13.705044 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-mr75l" I0221 09:04:14.079886 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:14.114552 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:14.114583 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0221 09:04:14.316540 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:04:14.322975 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-4pdmv" I0221 09:08:27.304208 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:08:27.312062 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-fm848" * * ==> kube-proxy [b198c3fa1558] <== * I0221 09:04:14.332138 1 node.go:163] Successfully retrieved node IP: 192.168.58.2 I0221 09:04:14.332226 1 server_others.go:138] "Detected node IP" address="192.168.58.2" I0221 09:04:14.332285 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:04:14.423508 1 server_others.go:206] "Using iptables Proxier" I0221 09:04:14.423558 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:04:14.423570 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:04:14.423591 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:04:14.424432 1 server.go:656] "Version info" version="v1.23.4" I0221 09:04:14.425726 1 config.go:226] "Starting endpoint slice config controller" I0221 09:04:14.425757 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:04:14.425822 1 config.go:317] "Starting service config controller" I0221 09:04:14.425841 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:04:14.526220 1 shared_informer.go:247] Caches are synced for service config I0221 09:04:14.526281 1 
shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [6e0b11913ead] <== * W0221 09:03:54.430880 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:03:54.430893 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 09:03:54.430879 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 09:03:54.430925 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 09:03:54.431681 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 09:03:54.431730 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 09:03:54.502529 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 09:03:54.504139 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0221 09:03:55.250688 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 09:03:55.250722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 09:03:55.362713 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 09:03:55.362749 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 09:03:55.411193 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" 
cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 09:03:55.411222 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 09:03:55.505977 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 09:03:55.506014 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0221 09:03:55.539567 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0221 09:03:55.539607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0221 09:03:55.556891 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 09:03:55.556925 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0221 09:03:55.565614 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 09:03:55.565651 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:03:55.588882 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0221 09:03:55.588920 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope I0221 09:03:58.425033 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 09:03:40 UTC, end at Mon 2022-02-21 09:14:40 UTC. 
-- Feb 21 09:11:49 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:11:49.406721 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:12:01 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:01.406349 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:12:01 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:01.406655 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:12:16 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:16.406724 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:12:16 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:16.407062 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:12:29 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:29.406058 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:12:29 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:29.406368 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:12:40 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:40.407282 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:12:40 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:40.408161 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:12:51 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:51.406355 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:12:51 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:51.406601 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:13:02 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:02.406042 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:13:33 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:33.549421 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:13:33 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:33.549735 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:13:33 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:13:33.550008 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:13:48 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:48.406541 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:13:48 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:13:48.406830 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:02 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:02.406168 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:02 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:02.406473 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:13 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:13.406773 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:13 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:13.407121 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:25 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:25.406669 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:25 enable-default-cni-20220221084933-6550 
kubelet[1965]: E0221 09:14:25.406881 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:39 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:39.406097 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:39 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:39.406312 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b * * ==> storage-provisioner [7e5453a7bbe4] <== * I0221 09:13:02.528428 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0221 09:13:32.530476 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p enable-default-cni-20220221084933-6550 -n enable-default-cni-20220221084933-6550 helpers_test.go:262: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/enable-default-cni]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 describe pod helpers_test.go:276: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 describe pod : exit status 1 (39.997986ms) ** stderr ** error: resource name may not be empty ** /stderr ** helpers_test.go:278: kubectl --context enable-default-cni-20220221084933-6550 describe pod : exit status 1 helpers_test.go:176: Cleaning up "enable-default-cni-20220221084933-6550" profile ... 
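The storage-provisioner crash above ("dial tcp 10.96.0.1:443: i/o timeout") and the nslookup timeouts that follow point the same way: pods cannot reach the kubernetes service ClusterIP, so pod-to-service networking is broken rather than the individual components. A minimal probe sketch for that hypothesis, meant to run from inside the cluster network; the address is taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the default kubernetes service ClusterIP seen above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		// Matches the provisioner's symptom when service routing is broken.
		fmt.Println("service VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}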
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p enable-default-cni-20220221084933-6550 === CONT TestStartStop/group/no-preload/serial/DeployApp start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012222096s start_stop_delete_test.go:181: (dbg) Run: kubectl --context no-preload-20220221091339-6550 exec busybox -- /bin/sh -c "ulimit -n" === RUN TestStartStop/group/no-preload/serial/EnableAddonWhileActive start_stop_delete_test.go:190: (dbg) Run: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220221091339-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain start_stop_delete_test.go:200: (dbg) Run: kubectl --context no-preload-20220221091339-6550 describe deploy/metrics-server -n kube-system === RUN TestStartStop/group/no-preload/serial/Stop start_stop_delete_test.go:213: (dbg) Run: out/minikube-linux-amd64 stop -p no-preload-20220221091339-6550 --alsologtostderr -v=3 === CONT TestNetworkPlugins/group/kubenet/DNS net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default === CONT TestNetworkPlugins/group/enable-default-cni helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p enable-default-cni-20220221084933-6550: (2.987234422s) === CONT TestStartStop/group/embed-certs === RUN TestStartStop/group/embed-certs/serial === RUN TestStartStop/group/embed-certs/serial/FirstStart start_stop_delete_test.go:171: (dbg) Run: out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4 === CONT TestStartStop/group/no-preload/serial/Stop start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220221091339-6550 --alsologtostderr -v=3: (10.877192373s) === RUN TestStartStop/group/no-preload/serial/EnableAddonAfterStop start_stop_delete_test.go:224: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550 start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550: exit status 7 (114.199649ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:224: status error: exit status 7 (may be ok) start_stop_delete_test.go:231: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220221091339-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 === RUN TestStartStop/group/no-preload/serial/SecondStart start_stop_delete_test.go:241: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0 E0221 09:14:56.952864 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kubenet/DNS net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136446775s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** 
stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:15:04.010193 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:15:10.799839 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123428364s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13232504s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:16:16.370294 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:16:18.873156 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:16:25.930686 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory E0221 09:16:33.843826 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory E0221 09:16:46.065510 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140494652s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:17:13.752037 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory E0221 09:17:30.568686 6550 
cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:18:27.320235 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.325462 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.335697 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.355954 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.396226 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.476519 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.636908 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:27.957431 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:28.598254 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:29.174160 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory E0221 09:18:29.878647 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:32.439685 6550 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory E0221 09:18:35.029781 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:18:37.560789 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137813409s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* === CONT TestNetworkPlugins/group/kubenet net_test.go:198: "kubenet" test finished in 29m4.799378418s, failed=true net_test.go:199: *** TestNetworkPlugins/group/kubenet FAILED at 2022-02-21 09:18:38.560907718 +0000 UTC m=+3211.323227311 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/kubenet]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect kubenet-20220221084933-6550 helpers_test.go:236: (dbg) docker inspect kubenet-20220221084933-6550: -- stdout -- [ { "Id": "42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301", "Created": "2022-02-21T09:07:35.104979001Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 462899, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T09:07:35.48618442Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/resolv.conf", "HostnamePath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/hostname", "HostsPath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/hosts", "LogPath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301-json.log", "Name": "/kubenet-20220221084933-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "kubenet-20220221084933-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "kubenet-20220221084933-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", 
"HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b204
5643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd/merged", "UpperDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd/diff", "WorkDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "kubenet-20220221084933-6550", "Source": "/var/lib/docker/volumes/kubenet-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "kubenet-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": 
"gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "kubenet-20220221084933-6550", "name.minikube.sigs.k8s.io": "kubenet-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "2ea7ab169662f2e3ae922211e4f6950f7381d67a66339e43f5c5b1dcb14edbd2", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49399" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49398" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49395" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49397" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49396" } ] }, "SandboxKey": "/var/run/docker/netns/2ea7ab169662", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "kubenet-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.76.2" }, "Links": null, "Aliases": [ "42de8a5f623e", "kubenet-20220221084933-6550" ], "NetworkID": "645548ce5696d8ac0208ac4f08e5263e8d80d8e1b04d7feaec6b203ababf5d53", "EndpointID": "ea840adb457037b7385a6cfe70ee74ea986517b42f5ffaf7dfa1e98ec5039916", "Gateway": "192.168.76.1", "IPAddress": "192.168.76.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:4c:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubenet-20220221084933-6550 -n kubenet-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/kubenet FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/kubenet]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p kubenet-20220221084933-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p kubenet-20220221084933-6550 logs -n 25: (1.226452639s) helpers_test.go:253: TestNetworkPlugins/group/kubenet logs: -- stdout -- * * ==> Audit <== * |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 
Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | | -p | kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:44 UTC | Mon, 21 Feb 2022 09:09:45 UTC | | | logs -n 25 | | | | | | | delete | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:45 UTC | Mon, 21 Feb 2022 09:09:48 UTC | | start | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:48 UTC | Mon, 21 Feb 2022 09:11:57 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --kvm-network=default | | | | | | | | --kvm-qemu-uri=qemu:///system | | | | | | | | --disable-driver-mounts | | | | | | | | --keep-context=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.16.0 | | | | | | | addons | enable metrics-server -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:06 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | | | | --registries=MetricsServer=fake.domain | | | | | | | start | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:25 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --network-plugin=kubenet | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:15 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | pgrep -a kubelet | | | | | | | stop | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:17 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | 
| old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | | | -p | bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:35 UTC | Mon, 21 Feb 2022 09:13:36 UTC | | | logs -n 25 | | | | | | | delete | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:36 UTC | Mon, 21 Feb 2022 09:13:39 UTC | | start | -p no-preload-20220221091339-6550 | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:39 UTC | Mon, 21 Feb 2022 09:14:33 UTC | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --preload=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | -p | enable-default-cni-20220221084933-6550 | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:39 UTC | Mon, 21 Feb 2022 09:14:40 UTC | | | logs -n 25 | | | | | | | addons | enable metrics-server -p | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:42 UTC | Mon, 21 Feb 2022 09:14:43 UTC | | | no-preload-20220221091339-6550 | | | | | | | | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | | | | --registries=MetricsServer=fake.domain | | | | | | | delete | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:40 UTC | Mon, 21 Feb 2022 09:14:43 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | stop | -p | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:43 UTC | Mon, 21 Feb 2022 09:14:54 UTC | | | no-preload-20220221091339-6550 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:54 UTC | Mon, 21 Feb 2022 09:14:54 UTC | | | no-preload-20220221091339-6550 | | | | | | | | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:14:54 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:14:54.373674 497077 out.go:297] Setting OutFile to fd 1 ... I0221 09:14:54.373746 497077 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:14:54.373749 497077 out.go:310] Setting ErrFile to fd 2... 
I0221 09:14:54.373753 497077 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:14:54.373852 497077 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:14:54.374071 497077 out.go:304] Setting JSON to false I0221 09:14:54.375981 497077 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3449,"bootTime":1645431446,"procs":953,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:14:54.376074 497077 start.go:122] virtualization: kvm guest I0221 09:14:54.378621 497077 out.go:176] * [no-preload-20220221091339-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:14:54.380233 497077 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:14:54.378810 497077 notify.go:193] Checking for updates... I0221 09:14:54.381954 497077 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:14:54.387076 497077 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:14:54.389173 497077 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:14:54.392021 497077 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:14:54.393093 497077 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5-rc.0 I0221 09:14:54.394047 497077 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:14:54.455714 497077 docker.go:132] docker version: linux-20.10.12 I0221 09:14:54.455798 497077 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:14:54.574019 497077 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:14:54.499125125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: 
NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:14:54.574121 497077 docker.go:237] overlay module found I0221 09:14:54.576244 497077 out.go:176] * Using the docker driver based on existing profile I0221 09:14:54.576277 497077 start.go:281] selected driver: docker I0221 09:14:54.576284 497077 start.go:798] validating driver "docker" against &{Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p 
MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:54.576403 497077 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:14:54.576451 497077 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:14:54.576475 497077 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:14:54.577679 497077 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:14:54.578448 497077 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:14:54.701284 497077 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:61 SystemTime:2022-02-21 09:14:54.63216937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} W0221 09:14:54.701445 497077 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:14:54.701470 497077 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 09:14:54.703498 497077 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:14:54.703624 497077 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:14:54.703651 497077 cni.go:93] Creating CNI manager for "" I0221 09:14:54.703661 497077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:14:54.703673 497077 start_flags.go:302] config: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:54.705486 497077 out.go:176] * Starting control plane node no-preload-20220221091339-6550 in cluster no-preload-20220221091339-6550 I0221 09:14:54.705526 497077 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:14:54.706834 497077 out.go:176] * Pulling base image ... 
I0221 09:14:54.706871 497077 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:14:54.706968 497077 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:14:54.707179 497077 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:14:54.707301 497077 cache.go:107] acquiring lock: {Name:mk9f52e4209628388c7268565716f70b6a94e740 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707300 497077 cache.go:107] acquiring lock: {Name:mkae39637d54454769ea96c0928557495a2624a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707484 497077 cache.go:107] acquiring lock: {Name:mk8eae83c87e69d4f61d57feebab23b9c618f6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707532 497077 cache.go:107] acquiring lock: {Name:mkc848fd9c1e80ffd1414dd8603c19c641b3fcb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707582 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists I0221 09:14:54.707598 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists I0221 09:14:54.707615 497077 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 326.476µs I0221 09:14:54.707636 497077 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded I0221 09:14:54.707620 497077 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 142.737µs I0221 09:14:54.707654 497077 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded I0221 09:14:54.707642 497077 cache.go:107] acquiring lock: {Name:mk8cb7540d8a1bd7faccdcc974630f93843749a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707669 497077 cache.go:107] acquiring lock: {Name:mk0340c3f1bf4216c7deeea4078501a3da4b3533 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707679 497077 cache.go:107] acquiring lock: {Name:mk048af2cde148e8a512f7653817cea4bb1a47e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707701 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists I0221 09:14:54.707723 497077 cache.go:115] 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 exists I0221 09:14:54.707739 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 exists I0221 09:14:54.707741 497077 cache.go:107] acquiring lock: {Name:mkd0cd2ae3afc8e39e716bbcd5f1e196bdbc0e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707764 497077 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" took 86.028µs I0221 09:14:54.707781 497077 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 succeeded I0221 09:14:54.707765 497077 cache.go:107] acquiring lock: {Name:mkf4838fe0f0754a09f1960b33e83e9fd73716a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707782 497077 cache.go:107] acquiring lock: {Name:mk4db3a52d1f4fba9dc9223f3164cb8742f00f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707715 497077 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 75.715µs I0221 09:14:54.707806 497077 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded I0221 09:14:54.707799 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists I0221 09:14:54.707823 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 exists I0221 09:14:54.707829 497077 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 90.577µs I0221 09:14:54.707841 497077 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" took 78.398µs I0221 09:14:54.707851 497077 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 succeeded I0221 09:14:54.707881 497077 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded I0221 09:14:54.707744 497077 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" took 77.49µs I0221 09:14:54.707899 497077 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 succeeded I0221 09:14:54.707636 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists I0221 09:14:54.707919 497077 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 427.987µs I0221 09:14:54.707938 497077 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded I0221 09:14:54.707837 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists I0221 09:14:54.707962 497077 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 180.957µs I0221 09:14:54.707545 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 exists I0221 09:14:54.707977 497077 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded I0221 09:14:54.707996 497077 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" took 707.222µs I0221 09:14:54.708010 497077 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 succeeded I0221 09:14:54.708017 497077 cache.go:87] Successfully saved all images to host disk. 
I0221 09:14:54.757072 497077 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:14:54.757120 497077 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:14:54.757137 497077 cache.go:208] Successfully downloaded all kic artifacts I0221 09:14:54.757204 497077 start.go:313] acquiring machines lock for no-preload-20220221091339-6550: {Name:mk3240de6571e839de8f8161d174b6e05c7d8988 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.757325 497077 start.go:317] acquired machines lock for "no-preload-20220221091339-6550" in 98.473µs I0221 09:14:54.757349 497077 start.go:93] Skipping create...Using existing machine configuration I0221 09:14:54.757361 497077 fix.go:55] fixHost starting: I0221 09:14:54.757661 497077 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:54.793061 497077 fix.go:108] recreateIfNeeded on no-preload-20220221091339-6550: state=Stopped err= W0221 09:14:54.793108 497077 fix.go:134] unexpected machine state, will restart: I0221 09:14:54.065834 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Running}} I0221 09:14:54.108359 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}} I0221 09:14:54.147246 495766 cli_runner.go:133] Run: docker exec embed-certs-20220221091443-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:14:54.252735 495766 oci.go:281] the created container "embed-certs-20220221091443-6550" has a running status. I0221 09:14:54.252787 495766 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa... I0221 09:14:54.394587 495766 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:14:54.497420 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}} I0221 09:14:54.538065 495766 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:14:54.538091 495766 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220221091443-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:14:54.646684 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}} I0221 09:14:54.683698 495766 machine.go:88] provisioning docker machine ... 
I0221 09:14:54.683738 495766 ubuntu.go:169] provisioning hostname "embed-certs-20220221091443-6550"
I0221 09:14:54.683812 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:14:54.721118 495766 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:54.721290 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 }
I0221 09:14:54.721306 495766 main.go:130] libmachine: About to run SSH command:
sudo hostname embed-certs-20220221091443-6550 && echo "embed-certs-20220221091443-6550" | sudo tee /etc/hostname
I0221 09:14:54.863859 495766 main.go:130] libmachine: SSH cmd err, output: : embed-certs-20220221091443-6550
I0221 09:14:54.863929 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:14:54.901280 495766 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:54.901415 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 }
I0221 09:14:54.901436 495766 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sembed-certs-20220221091443-6550' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220221091443-6550/g' /etc/hosts;
			else
				echo '127.0.1.1 embed-certs-20220221091443-6550' | sudo tee -a /etc/hosts;
			fi
		fi
I0221 09:14:55.027077 495766 main.go:130] libmachine: SSH cmd err, output: :
I0221 09:14:55.027115 495766 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube}
I0221 09:14:55.027158 495766 ubuntu.go:177] setting up certificates
I0221 09:14:55.027175 495766 provision.go:83] configureAuth start
I0221 09:14:55.027236 495766 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220221091443-6550
I0221 09:14:55.064958 495766 provision.go:138] copyHostCerts
I0221 09:14:55.065021 495766 exec_runner.go:144] found
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:14:55.065036 495766 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:14:55.065109 495766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:14:55.065213 495766 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:14:55.065231 495766 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:14:55.065265 495766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:14:55.065329 495766 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:14:55.065341 495766 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:14:55.065370 495766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:14:55.065422 495766 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220221091443-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220221091443-6550] I0221 09:14:55.190131 495766 provision.go:172] copyRemoteCerts I0221 09:14:55.190182 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:14:55.190228 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:55.229697 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:55.322901 495766 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:14:55.342173 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes) I0221 09:14:55.361624 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:14:55.385423 495766 provision.go:86] duration metric: configureAuth took 358.231938ms I0221 09:14:55.385454 495766 ubuntu.go:193] setting minikube options for container-runtime I0221 09:14:55.385648 495766 config.go:176] Loaded profile config "embed-certs-20220221091443-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:14:55.385706 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:55.422978 495766 main.go:130] libmachine: Using SSH client type: native I0221 09:14:55.423143 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 } I0221 09:14:55.423160 495766 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:14:55.551351 495766 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:14:55.551374 495766 ubuntu.go:71] root file system type: overlay I0221 09:14:55.551603 495766 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:14:55.551680 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:55.592738 495766 main.go:130] libmachine: Using SSH client type: native I0221 09:14:55.592917 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 } I0221 09:14:55.592983 495766 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:14:55.728704 495766 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:14:55.728787 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:55.763665 495766 main.go:130] libmachine: Using SSH client type: native I0221 09:14:55.763863 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 } I0221 09:14:55.763893 495766 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:14:56.422335 495766 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:14:55.724332118 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:14:56.422379 495766 machine.go:91] provisioned docker machine in 1.738656889s I0221 09:14:56.422390 495766 client.go:171] LocalClient.Create took 12.24132238s I0221 09:14:56.422400 495766 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20220221091443-6550" took 12.241377204s I0221 09:14:56.422410 495766 start.go:267] post-start starting for "embed-certs-20220221091443-6550" (driver="docker") I0221 09:14:56.422415 495766 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:14:56.422480 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:14:56.422542 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.456066 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:56.542630 495766 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:14:56.545460 495766 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:14:56.545480 495766 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:14:56.545491 495766 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:14:56.545497 495766 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:14:56.545508 495766 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
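The SSH command shown just before the diff, `sudo diff -u ... || { mv; daemon-reload; enable; restart; }`, is what makes the unit update idempotent: `diff -u` exits zero when the rendered docker.service matches what is already installed, so Docker is only moved into place and restarted on a real change. A small Go sketch that composes the same command string (the function name is invented; minikube assembles this inside its provisioner):

// Sketch of the idempotent unit-update idiom from the log.
package main

import "fmt"

func updateUnitCmd(path string) string {
	// `diff -u old new` exits 0 when the files match, so the `||` branch
	// (move + daemon-reload + enable + restart) only runs on a change.
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { "+
			"sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}

That is why the first profile's run prints the whole diff and restarts Docker, while the second profile's identical unit (later in this log) produces empty diff output and no restart.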
I0221 09:14:56.545569 495766 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:14:56.545648 495766 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:14:56.545743 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:14:56.552603 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:56.569802 495766 start.go:270] post-start completed in 147.380893ms I0221 09:14:56.570107 495766 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220221091443-6550 I0221 09:14:56.602861 495766 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/config.json ... I0221 09:14:56.603136 495766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:14:56.603185 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.636423 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:56.719646 495766 start.go:129] duration metric: createHost completed in 12.541291945s I0221 09:14:56.719670 495766 start.go:80] releasing machines lock for "embed-certs-20220221091443-6550", held for 12.541422547s I0221 09:14:56.719749 495766 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220221091443-6550 I0221 09:14:56.755073 495766 ssh_runner.go:195] Run: systemctl --version I0221 09:14:56.755120 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.755168 495766 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:14:56.755217 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.790615 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:56.792442 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:57.020347 495766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:14:57.030464 495766 ssh_runner.go:195] Run: sudo systemctl cat 
docker.service I0221 09:14:57.041630 495766 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:14:57.041684 495766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:14:57.051394 495766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:14:57.064671 495766 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:14:57.148196 495766 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:14:57.232221 495766 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:14:57.242443 495766 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:14:57.322703 495766 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:14:57.332494 495766 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:14:57.375245 495766 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:14:57.417612 495766 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:14:57.417696 495766 cli_runner.go:133] Run: docker network inspect embed-certs-20220221091443-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:14:57.450706 495766 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0221 09:14:57.454061 495766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:14:53.366557 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:55.367507 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:57.367593 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:57.465653 495766 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:14:57.465719 495766 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:14:57.465769 495766 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:14:57.499249 495766 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:14:57.499329 495766 docker.go:537] Images already preloaded, skipping extraction I0221 09:14:57.499379 495766 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:14:57.534216 495766 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 
k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:14:57.534243 495766 cache_images.go:84] Images are preloaded, skipping loading I0221 09:14:57.534282 495766 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:14:57.620204 495766 cni.go:93] Creating CNI manager for "" I0221 09:14:57.620227 495766 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:14:57.620235 495766 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:14:57.620247 495766 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220221091443-6550 NodeName:embed-certs-20220221091443-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:14:57.620360 495766 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "embed-certs-20220221091443-6550"
  kubeletExtraArgs:
    node-ip: 192.168.58.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 09:14:57.620435 495766 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220221091443-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2

[Install]
 config: {KubernetesVersion:v1.23.4 ClusterName:embed-certs-20220221091443-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0221 09:14:57.620483 495766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 09:14:57.627544 495766 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:14:57.627599 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:14:57.634610 495766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (384 bytes) I0221 09:14:57.647700 495766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:14:57.660906 495766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes) I0221 09:14:57.674055 495766 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts I0221 09:14:57.677021 495766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:14:57.686472 495766 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550 for IP: 192.168.58.2 I0221 09:14:57.686582 495766 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:14:57.686626 495766 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:14:57.686684 495766 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.key I0221 09:14:57.686698 495766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.crt with IP's: [] I0221 09:14:57.788229 495766 crypto.go:156] Writing cert to
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.crt ... I0221 09:14:57.788262 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.crt: {Name:mkec8981966785f7e07560a482d7402b98e81ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.788468 495766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.key ... I0221 09:14:57.788484 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.key: {Name:mkffe615b6963103dbeccb0665b05a85c8805e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.788566 495766 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041 I0221 09:14:57.788581 495766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:14:57.856333 495766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041 ... I0221 09:14:57.856373 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041: {Name:mk61adee2b3ddd19cca3a47f6f629fd31c40a64e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.856592 495766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041 ... 
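The crypto.go entries above show a serving certificate being issued against the cached minikubeCA with IP SANs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]: the node IP, the in-cluster apiserver service VIP, and loopback. A self-contained crypto/x509 sketch of that step follows; unlike minikube it generates a throwaway CA instead of loading ca.crt/ca.key from the profile, the subject and lifetime are placeholders, and error handling is elided for brevity.

// Sketch: issue a CA-signed serving cert whose SANs are IP addresses,
// as the "generating minikube signed cert ... with IP's" step does.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA created on the fly (minikube loads its cached CA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate with the same IP SANs the log prints.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

Embedding 10.96.0.1 matters because in-cluster clients reach the apiserver through the service network, not the node IP, and TLS verification checks the dialed IP against the SAN list.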
I0221 09:14:57.856609 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041: {Name:mkb6619dc2a52f5977bfa969c6373ef50a0410aa Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.856711 495766 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt I0221 09:14:57.856771 495766 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key I0221 09:14:57.856815 495766 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key I0221 09:14:57.856829 495766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt with IP's: [] I0221 09:14:57.968944 495766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt ... I0221 09:14:57.968975 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt: {Name:mk1a6a4f1101db5f82e9a1d9b328dd92800d4dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.969176 495766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key ... 
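Each "WriteFile acquiring ... {Delay:500ms Timeout:1m0s}" entry around these writes is a file lock, so concurrent minikube processes sharing one .minikube directory cannot clobber each other's certs. Minikube's lock.go uses a dedicated mutex package; the sketch below is only a simplified stand-in with the same delay/timeout shape, built on an exclusive lock file.

// Simplified stand-in for the lock-guarded WriteFile seen in the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// withFileLock retries creating <path>.lock every delay until timeout,
// runs fn while holding it, then removes the lock file.
func withFileLock(path string, delay, timeout time.Duration, fn func() error) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			return fn()
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	err := withFileLock("client.crt", 500*time.Millisecond, time.Minute, func() error {
		return os.WriteFile("client.crt", []byte("...PEM..."), 0o644)
	})
	fmt.Println("write:", err)
}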
I0221 09:14:57.969193 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key: {Name:mk1192e141df4adaca670a33ef20c34eebac4456 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.969374 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:14:57.969413 495766 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:14:57.969427 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:14:57.969452 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:14:57.969477 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:14:57.969509 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:14:57.969549 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:57.970447 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:14:57.988891 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:14:58.006496 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt --> 
/var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:14:58.024165 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:14:58.041995 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:14:58.060449 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:14:58.078267 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:14:58.095860 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:14:58.113569 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:14:58.131351 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:14:58.149204 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:14:58.167017 495766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:14:58.179866 495766 ssh_runner.go:195] Run: openssl version I0221 09:14:58.184620 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:14:58.192167 495766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:14:58.195132 495766 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:14:58.195172 495766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:14:58.199966 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:14:58.207367 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:14:58.214716 495766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:58.217752 495766 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:58.217791 495766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:58.222703 495766 ssh_runner.go:195] 
Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:14:58.230623 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:14:58.238011 495766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:14:58.241207 495766 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:14:58.241262 495766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:14:58.246138 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:14:58.254096 495766 kubeadm.go:391] StartCluster: {Name:embed-certs-20220221091443-6550 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:embed-certs-20220221091443-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:58.254217 495766 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:14:58.286449 495766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:14:58.293703 495766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:14:58.300962 495766 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:14:58.301022 495766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf 
/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:14:58.307987 495766 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:14:58.308037 495766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:14:54.795416 497077 out.go:176] * Restarting existing docker container for "no-preload-20220221091339-6550" ... I0221 09:14:54.795480 497077 cli_runner.go:133] Run: docker start no-preload-20220221091339-6550 I0221 09:14:55.189786 497077 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:55.229409 497077 kic.go:420] container "no-preload-20220221091339-6550" state is running. I0221 09:14:55.229776 497077 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:14:55.265712 497077 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:14:55.265927 497077 machine.go:88] provisioning docker machine ... 
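The "Error dialing TCP ... connection reset by peer" line above is expected rather than fatal: immediately after `docker start`, sshd inside the container is not accepting connections yet, so provisioning retries the forwarded port (here 127.0.0.1:49424) until it answers, which it does about three seconds later. A sketch of such a wait loop; the attempt count and timeouts here are invented, not minikube's values.

// Sketch: wait for the container's forwarded SSH port to come up.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr until it accepts a TCP connection or attempts run out.
func waitForSSH(addr string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			c.Close()
			return nil // port is accepting; the SSH handshake can proceed
		}
		time.Sleep(wait)
	}
	return fmt.Errorf("ssh port never came up: %w", err)
}

func main() {
	// 127.0.0.1:49424 is the host port Docker mapped to the container's 22/tcp.
	fmt.Println(waitForSSH("127.0.0.1:49424", 10, time.Second))
}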
I0221 09:14:55.265950 497077 ubuntu.go:169] provisioning hostname "no-preload-20220221091339-6550" I0221 09:14:55.265997 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:55.300719 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:55.300947 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:55.300962 497077 main.go:130] libmachine: About to run SSH command: sudo hostname no-preload-20220221091339-6550 && echo "no-preload-20220221091339-6550" | sudo tee /etc/hostname I0221 09:14:55.301593 497077 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42292->127.0.0.1:49424: read: connection reset by peer I0221 09:14:58.437161 497077 main.go:130] libmachine: SSH cmd err, output: : no-preload-20220221091339-6550 I0221 09:14:58.437240 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:58.471291 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:58.471422 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:58.471446 497077 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sno-preload-20220221091339-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220221091339-6550/g' /etc/hosts; else echo '127.0.1.1 no-preload-20220221091339-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:14:58.599291 497077 main.go:130] libmachine: SSH cmd err, output: : I0221 09:14:58.599327 497077 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:14:58.599356 497077 ubuntu.go:177] setting up certificates I0221 09:14:58.599374 497077 provision.go:83] configureAuth start I0221 09:14:58.599432 497077 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:14:58.635416 497077 provision.go:138] 
copyHostCerts
I0221 09:14:58.635490 497077 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ...
I0221 09:14:58.635505 497077 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem
I0221 09:14:58.635587 497077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes)
I0221 09:14:58.635698 497077 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ...
I0221 09:14:58.635723 497077 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem
I0221 09:14:58.635763 497077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes)
I0221 09:14:58.635848 497077 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ...
I0221 09:14:58.635861 497077 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem
I0221 09:14:58.635891 497077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes)
I0221 09:14:58.636017 497077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220221091339-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220221091339-6550]
I0221 09:14:58.819070 497077 provision.go:172] copyRemoteCerts
I0221 09:14:58.819127 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0221 09:14:58.819194 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:58.854906 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:14:58.942893 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0221 09:14:58.960791 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
I0221 09:14:58.978476 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0221 09:14:58.996794 497077 provision.go:86] duration metric: configureAuth took 397.404469ms
I0221 09:14:58.996825 497077 ubuntu.go:193] setting minikube options for container-runtime
I0221 09:14:58.997032 497077 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5-rc.0
I0221 09:14:58.997090 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.034468 497077 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:59.034682 497077 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49424 <nil> <nil>}
I0221 09:14:59.034700 497077 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 09:14:59.155226 497077 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0221 09:14:59.155248 497077 ubuntu.go:71] root file system type: overlay
I0221 09:14:59.155392 497077 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 09:14:59.155444 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.192388 497077 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:59.192685 497077 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49424 <nil> <nil>}
I0221 09:14:59.192751 497077 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 09:14:59.324197 497077 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0221 09:14:59.324270 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.359849 497077 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:59.360033 497077 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49424 <nil> <nil>}
I0221 09:14:59.360060 497077 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:14:59.486930 497077 main.go:130] libmachine: SSH cmd err, output: <nil>: 
I0221 09:14:59.486959 497077 machine.go:91] provisioned docker machine in 4.221017657s
I0221 09:14:59.486970 497077 start.go:267] post-start starting for "no-preload-20220221091339-6550" (driver="docker")
I0221 09:14:59.486977 497077 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:14:59.487048 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:14:59.487084 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.521395 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:14:59.610807 497077 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:14:59.613656 497077 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:14:59.613682 497077 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:14:59.613689 497077 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:14:59.613693 497077 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:14:59.613702 497077 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
I0221 09:14:59.613745 497077 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ...
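The three SSH commands above (write docker.service.new, diff -u against the live unit, and only mv + daemon-reload + restart when they differ) form an idempotent update: an unchanged unit never triggers a docker restart. A minimal Go sketch of the same pattern follows; the paths and service name are taken from the log, but the helper itself is illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes newBody to path+".new" and, only if it differs from the
// current unit, moves it into place, reloads systemd, and restarts the service.
func updateUnit(path, newBody, service string) error {
	tmp := path + ".new"
	if err := os.WriteFile(tmp, []byte(newBody), 0o644); err != nil {
		return err
	}
	// diff -u exits 0 when the files are identical, non-zero when they differ.
	if err := exec.Command("diff", "-u", path, tmp).Run(); err == nil {
		return os.Remove(tmp) // no change: keep the old unit, drop the temp file
	}
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnit("/lib/systemd/system/docker.service", "[Unit]\n...\n", "docker")
	fmt.Println("update result:", err)
}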
I0221 09:14:59.613805 497077 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs
I0221 09:14:59.613869 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0221 09:14:59.620854 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes)
I0221 09:14:59.639388 497077 start.go:270] post-start completed in 152.406038ms
I0221 09:14:59.639459 497077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0221 09:14:59.639511 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.673472 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:14:59.759445 497077 fix.go:57] fixHost completed within 5.00207894s
I0221 09:14:59.759478 497077 start.go:80] releasing machines lock for "no-preload-20220221091339-6550", held for 5.002135289s
I0221 09:14:59.759569 497077 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550
I0221 09:14:59.794168 497077 ssh_runner.go:195] Run: systemctl --version
I0221 09:14:59.794213 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.794261 497077 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0221 09:14:59.794323 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:14:59.830266 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:14:59.830934 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:15:00.059614 497077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0221 09:15:00.072126 497077 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 09:15:00.081745 497077 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0221 09:15:00.081807 497077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0221 09:15:00.091414 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0221 09:15:00.104576 497077 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0221 09:15:00.185593 497077 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0221 09:15:00.264404 497077 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 09:15:00.274607 497077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0221 09:15:00.356677 497077 ssh_runner.go:195] Run: sudo systemctl start docker
I0221 09:15:00.367220 497077 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:15:00.408646 497077 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:15:00.453346 497077 out.go:203] * Preparing Kubernetes v1.23.5-rc.0 on Docker 20.10.12 ...
I0221 09:15:00.453433 497077 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:15:00.490848 497077 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0221 09:15:00.494266 497077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:14:59.367833 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:01.866890 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:00.505904 497077 out.go:176]   - kubelet.housekeeping-interval=5m
I0221 09:15:00.505987 497077 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker
I0221 09:15:00.506034 497077 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:15:00.542443 497077 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.5-rc.0
k8s.gcr.io/kube-proxy:v1.23.5-rc.0
k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0
k8s.gcr.io/kube-scheduler:v1.23.5-rc.0
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc

-- /stdout --
I0221 09:15:00.542468 497077 cache_images.go:84] Images are preloaded, skipping loading
I0221 09:15:00.542516 497077 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0221 09:15:00.629839 497077 cni.go:93] Creating CNI manager for ""
I0221 09:15:00.629866 497077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:15:00.629874 497077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0221 09:15:00.629885 497077 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220221091339-6550 NodeName:no-preload-20220221091339-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 09:15:00.630008 497077 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "no-preload-20220221091339-6550"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.5-rc.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0221 09:15:00.630090 497077 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220221091339-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2

[Install]
 config:
{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0221 09:15:00.630139 497077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5-rc.0
I0221 09:15:00.637685 497077 binaries.go:44] Found k8s binaries, skipping transfer
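The kubeadm config above is rendered from the options struct dumped at kubeadm.go:158. A small sketch of how such a document can be produced with Go's text/template; the parameter struct and the trimmed template here are simplified stand-ins for illustration, not minikube's real types.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a minimal stand-in for the options the log shows feeding
// the generated kubeadm config.
type kubeadmParams struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const tpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tpl))
	// Values copied from the log entries above.
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.67.2",
		NodeName:         "no-preload-20220221091339-6550",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.23.5-rc.0",
	})
}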
I0221 09:15:00.637764 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0221 09:15:00.644789 497077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
I0221 09:15:00.657982 497077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I0221 09:15:00.670742 497077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
I0221 09:15:00.684208 497077 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0221 09:15:00.687208 497077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:15:00.696515 497077 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550 for IP: 192.168.67.2
I0221 09:15:00.696618 497077 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
I0221 09:15:00.696661 497077 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
I0221 09:15:00.696755 497077 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key
I0221 09:15:00.696832 497077 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e
I0221 09:15:00.696886 497077 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key
I0221 09:15:00.697009 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes)
W0221 09:15:00.697050 497077 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes
I0221 09:15:00.697065 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes)
I0221 09:15:00.697098 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes)
I0221 09:15:00.697131 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes)
I0221 09:15:00.697164 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes)
I0221 09:15:00.697218 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes)
I0221 09:15:00.698143 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0221 09:15:00.715811 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0221 09:15:00.733265 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0221 09:15:00.750977 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0221 09:15:00.769398 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0221 09:15:00.788563 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0221 09:15:00.806153 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0221 09:15:00.823360 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0221 09:15:00.841202 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0221 09:15:00.858966 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes)
I0221 09:15:00.877291 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes)
I0221 09:15:00.894966 497077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0221 09:15:00.907784 497077 ssh_runner.go:195] Run: openssl version
I0221 09:15:00.912646 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0221 09:15:00.920199 497077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0221 09:15:00.923468 497077 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem
I0221 09:15:00.923522 497077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0221 09:15:00.928412 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0221 09:15:00.935630 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem"
I0221 09:15:00.943451 497077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem
I0221 09:15:00.946441 497077 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem
I0221 09:15:00.946486 497077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem
I0221 09:15:00.951550 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0"
I0221 09:15:00.958531 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem"
I0221 09:15:00.966088 497077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem
I0221 09:15:00.969339 497077 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem
I0221 09:15:00.969381 497077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem
I0221 09:15:00.974253 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0"
I0221 09:15:00.981331 497077 kubeadm.go:391] StartCluster: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 09:15:00.981480 497077 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0221 09:15:01.015677 497077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0221 09:15:01.023263 497077 kubeadm.go:402] found existing configuration files, will attempt cluster restart
I0221 09:15:01.023291 497077 kubeadm.go:601] restartCluster start
I0221 09:15:01.023336 497077 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0221 09:15:01.030275 497077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:

I0221 09:15:01.031227 497077 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220221091339-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 09:15:01.031637 497077 kubeconfig.go:127] "no-preload-20220221091339-6550" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - will repair!
I0221 09:15:01.032422 497077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 09:15:01.035063 497077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0221 09:15:01.042293 497077 api_server.go:165] Checking apiserver status ...
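The `sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd` probe above is what flips the flow from a fresh kubeadm init to the restartCluster path: if all three paths exist, prior cluster state is assumed reusable. A sketch of that decision, with the file list copied from the log and the helper name invented for illustration:

package main

import (
	"fmt"
	"os"
)

// needsRestart reports whether an existing cluster's state files are present,
// in which case a restart/reconfigure is attempted instead of a fresh init.
func needsRestart() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false // any missing path means no reusable cluster state
		}
	}
	return true
}

func main() {
	if needsRestart() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing state, running full kubeadm init")
	}
}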
I0221 09:15:01.042341 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:01.057108 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:01.257508 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:01.257589 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:01.272689 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:01.457920 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:01.458008 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:01.472380 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:01.657667 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:01.657749 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:01.671846 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:01.858121 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:01.858197 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:01.873219 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:02.057537 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:02.057621 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:02.071822 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:02.258142 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:02.258214 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:02.272579 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:02.457275 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:02.457349 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:02.471283 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:02.657420 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:02.657492 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:02.673234 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:02.857338 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:02.857406 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:02.872086 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:03.057333 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:03.057406 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:03.072150 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:03.257375 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:03.257455 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:03.271764 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:03.458080 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:03.458143 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:03.472368 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:03.657605 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:03.657670 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:03.671895 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:03.857252 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:03.857342 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:03.872788 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:04.058092 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:04.058182 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:04.073466 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:04.073487 497077 api_server.go:165] Checking apiserver status ...
I0221 09:15:04.073535 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0221 09:15:04.087811 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0221 09:15:04.087838 497077 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
I0221 09:15:04.087845 497077 kubeadm.go:1067] stopping kube-system containers ...
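The burst of "Checking apiserver status ..." entries above is a fixed-interval poll: run `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms until a pid appears or the overall wait times out, at which point the log concludes "needs reconfigure". A self-contained Go sketch of that loop, run locally rather than over the SSH hop the log uses:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process until it
// appears or the deadline passes. pgrep exits 0 when a match is found.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServer(3 * time.Second); err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}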
I0221 09:15:04.087896 497077 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0221 09:15:04.127206 497077 docker.go:438] Stopping containers: [5d15ca256109 347a4215d0ae c0c4b379d5e5 f7181f1f8daf 7a5f5b44b56f 8276accbaf09 8a563c0a42c4 f9f5c7cf75f7 ae45a8000b2b b955dacc6170 326ecf4c809c f643ab14017c 3a10ec39e5a4 8404008f7aea 1eba7820624f]
I0221 09:15:04.127280 497077 ssh_runner.go:195] Run: docker stop 5d15ca256109 347a4215d0ae c0c4b379d5e5 f7181f1f8daf 7a5f5b44b56f 8276accbaf09 8a563c0a42c4 f9f5c7cf75f7 ae45a8000b2b b955dacc6170 326ecf4c809c f643ab14017c 3a10ec39e5a4 8404008f7aea 1eba7820624f
I0221 09:15:04.165650 497077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0221 09:15:04.176836 497077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0221 09:15:04.184036 497077 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Feb 21 09:14 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Feb 21 09:14 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2059 Feb 21 09:14 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Feb 21 09:14 /etc/kubernetes/scheduler.conf

I0221 09:15:04.184085 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0221 09:15:04.191169 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0221 09:15:04.198275 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0221 09:15:04.205826 497077 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:

I0221 09:15:04.205882 497077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0221 09:15:04.212938 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0221 09:15:04.221017 497077 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:

I0221 09:15:04.221073 497077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0221 09:15:04.228660 497077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0221 09:15:04.237858 497077 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0221 09:15:04.237919 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:04.283303 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:04.366796 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:06.366910 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:05.185878 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:05.354047 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:05.406771 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:05.462979 497077 api_server.go:51] waiting for apiserver process to appear ...
I0221 09:15:05.463083 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:15:05.979284 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:15:06.478843 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:15:06.978709 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:15:07.023275 497077 api_server.go:71] duration metric: took 1.56029598s to wait for apiserver process to appear ...
I0221 09:15:07.023310 497077 api_server.go:87] waiting for apiserver healthz status ...
I0221 09:15:07.023323 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:15:09.932898 495766 out.go:203]   - Generating certificates and keys ...
I0221 09:15:09.935933 495766 out.go:203]   - Booting up control plane ...
I0221 09:15:08.367398 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:10.367690 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:12.867151 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:09.939117 495766 out.go:203]   - Configuring RBAC rules ...
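Rather than a single `kubeadm init`, the restart path above drives the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) one at a time against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing via os/exec, with the paths taken from the log; error handling is simplified and this is not minikube's actual bootstrapper code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.23.5-rc.0" // versioned binary dir from the log
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		// Prepend the versioned binary dir, as the `sudo env PATH=...` wrapper does.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}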
I0221 09:15:09.941894 495766 cni.go:93] Creating CNI manager for ""
I0221 09:15:09.941923 495766 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:15:09.941953 495766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0221 09:15:09.942114 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:09.942193 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=embed-certs-20220221091443-6550 minikube.k8s.io/updated_at=2022_02_21T09_15_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:10.395924 495766 ops.go:34] apiserver oom_adj: -16
I0221 09:15:10.396025 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:10.972819 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:11.472464 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:11.972445 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:12.472331 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:12.973190 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:13.473103 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:10.288934 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0221 09:15:10.288963 497077 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0221 09:15:10.789177 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:15:10.794490 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0221 09:15:10.794516 497077 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0221 09:15:11.290065 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:15:11.294697 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0221 09:15:11.294728 497077 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0221 09:15:11.789231 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:15:11.806954 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok
I0221 09:15:11.814884 497077 api_server.go:140] control plane version: v1.23.5-rc.0
I0221 09:15:11.814957 497077 api_server.go:130] duration metric: took 4.791639219s to wait for apiserver health ...
I0221 09:15:11.814979 497077 cni.go:93] Creating CNI manager for ""
I0221 09:15:11.815050 497077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:15:11.815064 497077 system_pods.go:43] waiting for kube-system pods to appear ...
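The healthz wait above tolerates a 403 (the probe runs as system:anonymous) and a 500 (bootstrap post-start hooks still failing) as "not ready yet", and stops at the first 200. A hedged Go sketch of that probe; the insecure TLS config is a stand-in for however the real client trusts the apiserver's serving cert:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz polls the apiserver's /healthz endpoint until it returns 200,
// logging any non-200 body the way the entries above do.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver's cert is not in the host trust store in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	_ = checkHealthz("https://192.168.67.2:8443/healthz", time.Minute)
}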
I0221 09:15:11.828697 497077 system_pods.go:59] 8 kube-system pods found
I0221 09:15:11.828740 497077 system_pods.go:61] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:15:11.828750 497077 system_pods.go:61] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running
I0221 09:15:11.828763 497077 system_pods.go:61] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0221 09:15:11.828773 497077 system_pods.go:61] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0221 09:15:11.828788 497077 system_pods.go:61] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running
I0221 09:15:11.828795 497077 system_pods.go:61] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running
I0221 09:15:11.828804 497077 system_pods.go:61] "metrics-server-7f49dcbd7-4tqkf" [7f53f035-82f2-4a85-a0ca-dba360593f86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0221 09:15:11.828815 497077 system_pods.go:61] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Running
I0221 09:15:11.828822 497077 system_pods.go:74] duration metric: took 13.746908ms to wait for pod list to return data ...
I0221 09:15:11.828836 497077 node_conditions.go:102] verifying NodePressure condition ...
I0221 09:15:11.833876 497077 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0221 09:15:11.833905 497077 node_conditions.go:123] node cpu capacity is 8
I0221 09:15:11.833915 497077 node_conditions.go:105] duration metric: took 5.074671ms to run NodePressure ...
I0221 09:15:11.833934 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:12.328996 497077 kubeadm.go:737] waiting for restarted kubelet to initialise ...
I0221 09:15:12.333535 497077 kubeadm.go:752] kubelet initialised
I0221 09:15:12.333563 497077 kubeadm.go:753] duration metric: took 4.536476ms waiting for restarted kubelet to initialise ...
I0221 09:15:12.333572 497077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:15:12.339145 497077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-t6lcp" in "kube-system" namespace to be "Ready" ...
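The pod_ready entries that follow poll each system-critical pod until its PodReady condition turns True. A sketch of one such check with client-go; the kubeconfig path and hard-coded pod name are illustrative, and this is not minikube's actual pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True, which is what
// the repeated pod_ready log lines are checking.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	name, ns := "coredns-64897985d-t6lcp", "kube-system"
	for {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(2 * time.Second)
	}
}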
I0221 09:15:14.867238 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:17.367114 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:13.972289 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:14.473131 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:14.972560 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:15.472492 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:15.972292 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:16.473142 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:16.972837 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:17.472652 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:17.972922 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:18.473047 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:14.417136 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:16.417642 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:18.418101 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:19.866690 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:21.866732 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:18.972808 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:19.473175 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:19.972502 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:20.472258 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:20.972822 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:21.472280 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:21.972878 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:22.472293 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:22.972455 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:23.472236 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:15:23.546435 495766 kubeadm.go:1020] duration metric: took 13.604362704s to wait for elevateKubeSystemPrivileges.
I0221 09:15:23.546462 495766 kubeadm.go:393] StartCluster complete in 25.292374548s
I0221 09:15:23.546476 495766 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:15:23.546590 495766 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 09:15:23.548292 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:15:24.065153 495766 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220221091443-6550" rescaled to 1
I0221 09:15:24.065200 495766 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 09:15:24.067480 495766 out.go:176] * Verifying Kubernetes components...
I0221 09:15:24.067530 495766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 09:15:24.065290 495766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0221 09:15:24.065299 495766 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0221 09:15:24.065489 495766 config.go:176] Loaded profile config "embed-certs-20220221091443-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 09:15:24.067675 495766 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220221091443-6550"
I0221 09:15:24.067689 495766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220221091443-6550"
I0221 09:15:24.067661 495766 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220221091443-6550"
I0221 09:15:24.067737 495766 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220221091443-6550"
W0221 09:15:24.067746 495766 addons.go:165] addon storage-provisioner should already be in state true
I0221 09:15:24.067776 495766 host.go:66] Checking if "embed-certs-20220221091443-6550" exists ...
I0221 09:15:24.068065 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}}
I0221 09:15:24.068216 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}}
I0221 09:15:20.419719 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:22.917623 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:24.117721 495766 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0221 09:15:24.117845 495766 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0221 09:15:24.117859 495766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0221 09:15:24.117912 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:15:24.119141 495766 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220221091443-6550"
W0221 09:15:24.119159 495766 addons.go:165] addon default-storageclass should already be in state true
I0221 09:15:24.119181 495766 host.go:66] Checking if "embed-certs-20220221091443-6550" exists ...
I0221 09:15:24.119561 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}}
I0221 09:15:24.150163 495766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0221 09:15:24.152277 495766 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220221091443-6550" to be "Ready" ...
I0221 09:15:24.156494 495766 node_ready.go:49] node "embed-certs-20220221091443-6550" has status "Ready":"True"
I0221 09:15:24.156521 495766 node_ready.go:38] duration metric: took 4.208371ms waiting for node "embed-certs-20220221091443-6550" to be "Ready" ...
I0221 09:15:24.156533 495766 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:15:24.165003 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker}
I0221 09:15:24.168204 495766 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2pl94" in "kube-system" namespace to be "Ready" ...
I0221 09:15:24.168723 495766 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0221 09:15:24.168748 495766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0221 09:15:24.168801 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:15:24.203690 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker}
I0221 09:15:24.363020 495766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0221 09:15:24.408664 495766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0221 09:15:25.610357 495766 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.460154783s)
I0221 09:15:25.610390 495766 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
I0221 09:15:25.625315 495766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.262256298s)
I0221 09:15:25.710944 495766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.302239283s)
I0221 09:15:23.867202 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:26.367087 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:25.713479 495766 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
I0221 09:15:25.713504 495766 addons.go:417] enableAddons completed in 1.648214398s
I0221 09:15:26.228068 495766 pod_ready.go:102] pod "coredns-64897985d-2pl94" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:26.727622 495766 pod_ready.go:92] pod "coredns-64897985d-2pl94" in "kube-system" namespace has status "Ready":"True"
I0221 09:15:26.727655 495766 pod_ready.go:81] duration metric: took 2.559423348s waiting for pod "coredns-64897985d-2pl94" in "kube-system" namespace to be "Ready" ...
I0221 09:15:26.727667 495766 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-rcbll" in "kube-system" namespace to be "Ready" ...
I0221 09:15:28.737549 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:24.919389 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:27.417293 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:28.866946 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:31.367455 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:31.237326 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:33.737721 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:29.418032 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:31.418185 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:33.418410 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:33.867224 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:36.366594 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:36.237918 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:38.737249 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:35.918507 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:38.417749 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:38.866204 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:40.866820 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:42.867161 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:40.738215 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:43.238090 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:40.418302 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:42.917405 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:45.366433 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:47.366782 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:45.737608 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:47.737755 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:44.918091 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:46.918152 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:49.367183 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:51.866295 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:50.237684 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:52.237829 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:49.418166 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:51.917568 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:53.917904 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:53.866557 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:55.867219 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:54.737606 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:57.237432 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:56.417829 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:58.918097 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:58.367535 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:00.866365 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:59.237796 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:01.738167 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:00.919349 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:03.417894 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:03.367494 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:05.367891 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:07.867050 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:04.237059 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:06.237917 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:08.238619 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:05.917722 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:07.918337 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:09.867221 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:12.366939 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:10.737952 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:13.236998 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:09.918793 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:12.418654 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:14.867080 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:17.367523 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:15.237467 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:17.237838 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:14.918524 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:17.418433 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:19.866160 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:21.866878 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:19.737940 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:22.237799 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:19.418605 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:21.917158 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:23.917241 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:24.366585 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:26.366639 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:24.737496 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:27.237354 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:25.918347 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:28.417343 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:28.866983 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:31.367058 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:29.237595 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:31.737862 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:30.417414 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:32.417980 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:33.367175 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:35.367281 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:37.866816 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:34.237755 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:36.737064 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:38.738243 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:34.418103 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:36.917322 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:38.918118 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:40.367242 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:42.867084 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:41.238236 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:43.737080 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:41.418017 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:43.418695 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:44.867117 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:47.367137 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:45.737106 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:48.237075 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:45.918081 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:48.417811 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:49.867322 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:52.366506 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:50.237826 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:52.737823 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:50.917555 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:52.919350 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:54.867538 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:56.867663 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:55.237168 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:57.737187 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:55.418364 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:57.918467 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:59.367240 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:01.867170 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:16:59.737493 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:02.237245 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:00.418173 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:02.418722 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:03.867259 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:06.366958 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:04.237795 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:06.737122 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:08.737756 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:04.917358 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:07.420503 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:08.367111 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:10.866482 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:11.236681 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:13.237115 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:09.917193 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:11.917715 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:13.366478 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:15.362217 481686 pod_ready.go:81] duration metric: took 4m0.400042148s waiting for pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace to be "Ready" ...
E0221 09:17:15.362243 481686 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace to be "Ready" (will not retry!)
I0221 09:17:15.362281 481686 pod_ready.go:38] duration metric: took 4m1.599876939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:17:15.362311 481686 kubeadm.go:605] restartCluster took 4m50.480983318s
W0221 09:17:15.362454 481686 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I0221 09:17:15.362498 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0221 09:17:15.237295 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:17.737915 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:18.175209 481686 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.812684621s)
I0221 09:17:18.175276 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 09:17:18.185025 481686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0221 09:17:18.192447 481686 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0221 09:17:18.192507 481686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0221 09:17:18.199480 481686 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0221 09:17:18.199532 481686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0221 09:17:14.418203 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:16.418817 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:18.918397 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:19.004199 481686 out.go:203]   - Generating certificates and keys ...
I0221 09:17:20.154486 481686 out.go:203]   - Booting up control plane ...
I0221 09:17:19.738185 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:22.237886 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:20.919065 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:23.417789 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:24.238025 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:26.736754 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:28.737285 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:25.418974 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:27.917549 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:29.697817 481686 out.go:203]   - Configuring RBAC rules ...
I0221 09:17:30.117382 481686 cni.go:93] Creating CNI manager for ""
I0221 09:17:30.117409 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:17:30.117456 481686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0221 09:17:30.117488 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:30.117513 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=old-k8s-version-20220221090948-6550 minikube.k8s.io/updated_at=2022_02_21T09_17_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:30.138076 481686 ops.go:34] apiserver oom_adj: -16
I0221 09:17:30.332701 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:30.962889 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:31.463137 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:31.963106 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:32.462475 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:32.962808 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:30.737617 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:32.737649 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:30.418236 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:32.418744 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:33.463336 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:33.963223 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:34.462476 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:34.963309 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:35.462739 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:35.962494 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:36.462808 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:36.962302 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:37.463170 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:37.962954 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:35.237642 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:37.737932 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:34.917465 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:36.918090 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:38.918163 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:38.462353 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:38.962334 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:39.462945 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:39.963254 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:40.462471 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:40.962357 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:41.463268 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:41.962492 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:42.462438 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:42.962696 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:40.237611 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:42.737701 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:41.417277 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:43.417810 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:43.462358 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:43.963162 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:44.462847 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:44.962328 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:45.462708 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 09:17:45.543971 481686 kubeadm.go:1020] duration metric: took 15.42652334s to wait for elevateKubeSystemPrivileges.
I0221 09:17:45.544001 481686 kubeadm.go:393] StartCluster complete in 5m20.703452161s
I0221 09:17:45.544025 481686 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:17:45.544116 481686 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 09:17:45.545695 481686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 09:17:46.064567 481686 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220221090948-6550" rescaled to 1
I0221 09:17:46.064648 481686 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 09:17:46.064685 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0221 09:17:46.066864 481686 out.go:176] * Verifying Kubernetes components...
I0221 09:17:46.064756 481686 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
I0221 09:17:46.067046 481686 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220221090948-6550"
I0221 09:17:46.067065 481686 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220221090948-6550"
I0221 09:17:46.067077 481686 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220221090948-6550"
I0221 09:17:46.067078 481686 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220221090948-6550"
I0221 09:17:46.067091 481686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220221090948-6550"
I0221 09:17:46.067051 481686 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220221090948-6550"
I0221 09:17:46.067104 481686 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220221090948-6550"
W0221 09:17:46.067120 481686 addons.go:165] addon metrics-server should already be in state true
I0221 09:17:46.067154 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ...
W0221 09:17:46.067091 481686 addons.go:165] addon storage-provisioner should already be in state true
I0221 09:17:46.067245 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ...
I0221 09:17:46.067105 481686 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220221090948-6550"
W0221 09:17:46.067344 481686 addons.go:165] addon dashboard should already be in state true
I0221 09:17:46.067383 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ...
I0221 09:17:46.064932 481686 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0221 09:17:46.067445 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}}
I0221 09:17:46.066931 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 09:17:46.067658 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}}
I0221 09:17:46.067687 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}}
I0221 09:17:46.067864 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}}
I0221 09:17:46.114243 481686 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
I0221 09:17:46.115639 481686 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
I0221 09:17:46.115714 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0221 09:17:46.115730 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0221 09:17:46.115782 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:17:46.117743 481686 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
I0221 09:17:46.117801 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0221 09:17:46.117809 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
I0221 09:17:46.117855 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:17:46.125475 481686 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0221 09:17:46.125624 481686 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0221 09:17:46.125645 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0221 09:17:46.125705 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:17:46.135556 481686 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220221090948-6550"
W0221 09:17:46.135586 481686 addons.go:165] addon default-storageclass should already be in state true
I0221 09:17:46.135615 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ...
I0221 09:17:46.136085 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}}
I0221 09:17:46.166437 481686 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220221090948-6550" to be "Ready" ...
I0221 09:17:46.166456 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0221 09:17:46.167667 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker}
I0221 09:17:46.170013 481686 node_ready.go:49] node "old-k8s-version-20220221090948-6550" has status "Ready":"True"
I0221 09:17:46.170038 481686 node_ready.go:38] duration metric: took 3.562771ms waiting for node "old-k8s-version-20220221090948-6550" to be "Ready" ...
I0221 09:17:46.170049 481686 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:17:46.173374 481686 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace to be "Ready" ...
I0221 09:17:46.180921 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker}
I0221 09:17:46.183084 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker}
I0221 09:17:46.187194 481686 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0221 09:17:46.187215 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0221 09:17:46.187277 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550
I0221 09:17:46.237820 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker}
I0221 09:17:46.322666 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0221 09:17:46.323751 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0221 09:17:46.323772 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0221 09:17:46.323802 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0221 09:17:46.323821 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
I0221 09:17:46.414165 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0221 09:17:46.414190 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0221 09:17:46.416636 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0221 09:17:46.416744 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
I0221 09:17:46.432835 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0221 09:17:46.432863 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0221 09:17:46.506794 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0221 09:17:46.506871 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
I0221 09:17:46.514112 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0221 09:17:46.514139 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
I0221 09:17:46.523096 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0221 09:17:46.527550 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0221 09:17:46.532049 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0221 09:17:46.532075 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0221 09:17:46.617605 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0221 09:17:46.617639 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0221 09:17:46.704195 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0221 09:17:46.704223 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0221 09:17:46.726689 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0221 09:17:46.726721 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0221 09:17:46.825437 481686 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0221 09:17:46.831248 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0221 09:17:46.831280 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0221 09:17:46.921668 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0221 09:17:47.620931 481686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.093333198s)
I0221 09:17:47.620975 481686 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220221090948-6550"
I0221 09:17:48.116573 481686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.194843902s)
I0221 09:17:45.237588 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:47.237842 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:45.918077 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:47.920821 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:48.118737 481686 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0221 09:17:48.118777 481686 addons.go:417] enableAddons completed in 2.054027114s
I0221 09:17:48.206323 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:50.707111 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:49.238934 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:51.738845 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:50.417131 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:52.418765 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:53.207204 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:55.683856 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False"
I0221 09:17:57.683807 481686 pod_ready.go:92] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"True"
I0221 09:17:57.683836 481686 pod_ready.go:81] duration metric: took 11.510430517s waiting for pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace to be "Ready" ...
I0221 09:17:57.683845 481686 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmrhh" in "kube-system" namespace to be "Ready" ...
I0221 09:17:57.687261 481686 pod_ready.go:92] pod "kube-proxy-bmrhh" in "kube-system" namespace has status "Ready":"True"
I0221 09:17:57.687281 481686 pod_ready.go:81] duration metric: took 3.430535ms waiting for pod "kube-proxy-bmrhh" in "kube-system" namespace to be "Ready" ...
I0221 09:17:57.687289 481686 pod_ready.go:38] duration metric: took 11.517225526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:17:57.687334 481686 api_server.go:51] waiting for apiserver process to appear ... I0221 09:17:57.687382 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:17:57.711089 481686 api_server.go:71] duration metric: took 11.646398188s to wait for apiserver process to appear ... I0221 09:17:57.711122 481686 api_server.go:87] waiting for apiserver healthz status ... I0221 09:17:57.711138 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:17:57.715750 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 09:17:57.716530 481686 api_server.go:140] control plane version: v1.16.0 I0221 09:17:57.716553 481686 api_server.go:130] duration metric: took 5.42444ms to wait for apiserver health ... I0221 09:17:57.716562 481686 system_pods.go:43] waiting for kube-system pods to appear ... I0221 09:17:57.719359 481686 system_pods.go:59] 4 kube-system pods found I0221 09:17:57.719387 481686 system_pods.go:61] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:57.719393 481686 system_pods.go:61] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:57.719403 481686 system_pods.go:61] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:57.719412 481686 system_pods.go:61] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:57.719418 481686 system_pods.go:74] duration metric: took 2.851415ms to wait for pod list to return data ... I0221 09:17:57.719431 481686 default_sa.go:34] waiting for default service account to be created ... I0221 09:17:57.721393 481686 default_sa.go:45] found service account: "default" I0221 09:17:57.721412 481686 default_sa.go:55] duration metric: took 1.97454ms for default service account to be created ... I0221 09:17:57.721418 481686 system_pods.go:116] waiting for k8s-apps to be running ... 
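The healthz wait above is a plain HTTPS GET against the apiserver that counts as healthy only on a 200, as the `returned 200: ok` line shows. A minimal sketch; skipping TLS verification is an assumption for brevity, and a real client should trust the cluster CA instead:

```go
package health

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes an endpoint such as https://192.168.49.2:8443/healthz
// and treats any status other than 200 as unhealthy.
func checkHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: InsecureSkipVerify is for brevity only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d, want 200", resp.StatusCode)
	}
	return nil
}
```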
I0221 09:17:57.723938 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:57.723960 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:57.723967 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:57.723974 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:57.723978 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:57.723994 481686 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:57.942423 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:57.942454 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:57.942462 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:57.942472 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:57.942478 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:57.942495 481686 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:54.238088 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:56.737044 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:54.917994 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:56.918138 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:58.239424 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:58.239452 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:58.239456 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:58.239463 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:58.239468 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:58.239483 481686 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:58.598172 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:58.598203 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:58.598209 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:58.598218 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" 
[ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:58.598225 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:58.598247 481686 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:59.083302 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:59.083333 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:59.083338 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:59.083346 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:59.083351 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:59.083368 481686 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:59.631168 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:59.631199 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:59.631206 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:59.631215 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:59.631221 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:59.631237 481686 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:00.319146 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:00.319175 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:00.319180 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:00.319188 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:00.319192 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:00.319207 481686 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:01.362111 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:01.362142 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:01.362149 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:01.362158 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / 
Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:01.362164 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:01.362181 481686 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:02.390274 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:02.390307 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:02.390312 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:02.390319 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:02.390324 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:02.390338 481686 retry.go:31] will retry after 1.268973106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:59.237324 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:01.737286 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:59.417757 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:01.918299 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:03.664086 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:03.664125 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:03.664135 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:03.664149 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:03.664160 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:03.664186 481686 retry.go:31] will retry after 1.733071555s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:05.400816 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:05.400845 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:05.400850 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:05.400858 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:05.400862 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:05.400878 481686 retry.go:31] will retry after 2.410580953s: missing components: etcd, kube-apiserver, 
kube-controller-manager, kube-scheduler I0221 09:18:07.815378 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:07.815408 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:07.815417 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:07.815426 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:07.815432 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:07.815450 481686 retry.go:31] will retry after 3.437877504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:04.237718 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:06.737668 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:04.417897 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:06.918123 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:11.259836 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:11.259863 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:11.259871 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:11.259877 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:11.259882 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:11.259897 481686 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:09.238310 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:11.737798 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:09.417840 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:11.418215 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:13.418290 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:14.525258 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:14.525285 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:14.525290 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:14.525298 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: 
[metrics-server]) I0221 09:18:14.525307 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:14.525326 481686 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:14.237001 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:16.237084 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:18.737356 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:15.917952 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:17.918144 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:18.614985 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:18.615062 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:18.615070 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:18.615080 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:18.615088 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:18.615108 481686 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:21.237651 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:23.737263 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:19.918512 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:22.417587 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:25.021377 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:25.021411 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:25.021425 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:25.021439 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:25.021446 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:25.021473 481686 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:25.738096 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:28.237653 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:24.418437 497077 pod_ready.go:102] pod 
"coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:26.918202 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:31.090428 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:31.090463 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:31.090470 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:31.090485 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:31.090494 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:31.090512 481686 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:30.737781 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:33.237355 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:29.417994 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:31.418203 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:33.919742 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:35.237575 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:37.237773 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:07:35 UTC, end at Mon 2022-02-21 09:18:39 UTC. 
-- Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592500199Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592523907Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592538696Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592546949Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.598167477Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603475567Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603503353Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603508973Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603672025Z" level=info msg="Loading containers: start." Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.688849439Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.724291378Z" level=info msg="Loading containers: done." Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.736718437Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.736793937Z" level=info msg="Daemon has completed initialization" Feb 21 09:07:37 kubenet-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. 
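This Docker section is journald output for the docker unit. A sketch of how such a section can be collected; running journalctl locally is an assumption here, whereas minikube runs an equivalent command on the node:

```go
package logs

import (
	"context"
	"os/exec"
)

// dockerJournal captures the docker unit's journal, the raw material of
// a "==> Docker <==" section like this one.
func dockerJournal(ctx context.Context) (string, error) {
	out, err := exec.CommandContext(ctx, "journalctl", "-u", "docker", "--no-pager").CombinedOutput()
	return string(out), err
}
```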
Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.755443583Z" level=info msg="API listen on [::]:2376"
Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.759148963Z" level=info msg="API listen on /var/run/docker.sock"
Feb 21 09:08:15 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:15.430229963Z" level=info msg="ignoring event" container=b4d0b09fc93c25117ea61667b96317884a15c03f4858f4c45bd1e396cd363514 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:08:15 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:15.536638943Z" level=info msg="ignoring event" container=eea520917917d4a2be1b0666a121a5d7f45c3d95ac7905327d287d61d815b40e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:08:36 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:36.646212875Z" level=info msg="ignoring event" container=d514dd85625fd8c62e58361e26a2c9be6fe300c8b78e3c122e232d1992f24b85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:09:07 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:09:07.650166793Z" level=info msg="ignoring event" container=a97ee22eacba2dca5bca29703201332d8a9269c81b84c9092251a87ec610e248 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:09:51 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:09:51.587076436Z" level=info msg="ignoring event" container=4bd886a067c518fe30582a3f0670f0a8bf70b070f8181c72f4670434a1b33a60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:10:49 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:10:49.565679742Z" level=info msg="ignoring event" container=2965c9e60bc0a6c53488139147a9b08a5f0f7d8df4f737f43d7166d1649d012f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:12:10 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:12:10.565718850Z" level=info msg="ignoring event" container=b01b78e17698eaa90b27a2bcb80acab11164a3322c3ba3f8c2b1435e48a1eb8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:14:11 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:14:11.592208704Z" level=info msg="ignoring event" container=17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:17:34 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:17:34.562442082Z" level=info msg="ignoring event" container=0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID
0ff3389b32145   6e38f40d628db   About a minute ago   Exited    storage-provisioner       6         916953401c890
08b974dc9b1e2   k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1   6 minutes ago   Running   dnsutils   0   09669f070f8f2
137bf98e19abd   a4ca41631cc7a   10 minutes ago       Running   coredns                   0         a70871c657779
4349ba0d8abd8   2114245ec4d6b   10 minutes ago       Running   kube-proxy                0         dacf1dd44398f
a74500fb26ddd   25f8c7f3da61c   10 minutes ago       Running   etcd                      0         1238581788c25
2162c71d2bacc   62930710c9634   10 minutes ago       Running   kube-apiserver            0         7dfd27d72f637
dceb444a0ede6   25444908517a5   10 minutes ago       Running   kube-controller-manager   0         0bda65172cb04
390268e8d3874   aceacb6244f9f   10 minutes ago       Running   kube-scheduler            0         9dcd04836497d
*
* ==> coredns [137bf98e19ab] <==
*
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
*
* ==> describe nodes <==
*
Name:               kubenet-20220221084933-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubenet-20220221084933-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=kubenet-20220221084933-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T09_07_51_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 09:07:47 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kubenet-20220221084933-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:18:34 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:07:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:07:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:07:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:08:01 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    kubenet-20220221084933-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                0fc3953c-2ccc-4688-916f-cad0f4a89c0d
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace    Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                                  ------------  ----------  ---------------  -------------  ---
  default      netcat-668db85669-4md9w                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
  kube-system  coredns-64897985d-cx6k8                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
  kube-system  etcd-kubenet-20220221084933-6550                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
  kube-system  kube-apiserver-kubenet-20220221084933-6550            250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
  kube-system  kube-controller-manager-kubenet-20220221084933-6550   200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
  kube-system  kube-proxy-npgzw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
  kube-system  kube-scheduler-kubenet-20220221084933-6550            100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
  kube-system  storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From        Message
  ----    ------                   ----  ----        -------
  Normal  Starting                 10m   kube-proxy
  Normal  NodeHasSufficientMemory  10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  10m   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 10m   kubelet     Starting kubelet.
Normal NodeReady 10m kubelet Node kubenet-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.963841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.035853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023933] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [Feb21 09:14] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.035516] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.019972] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +2.943777] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027861] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.019959] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +2.951870] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.015815] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027946] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 * * ==> etcd [a74500fb26dd] <== * {"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubenet-20220221084933-6550 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:07:45.522Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T09:07:45.522Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T09:07:45.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"} 
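The etcd warnings just below flag reads that exceeded the server's 100ms expected duration. A sketch of issuing and timing the same kind of count-only range read with the etcd v3 client; the prefix, threshold handling, and helper name are illustrative:

```go
package etcdcheck

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// timedCountOnlyRange runs a count-only range read, the request shape
// behind the `read-only range ... count_only:true` warnings, and logs
// it if it crosses the 100ms expectation.
func timedCountOnlyRange(ctx context.Context, cli *clientv3.Client, prefix string) error {
	start := time.Now()
	_, err := cli.Get(ctx, prefix, clientv3.WithPrefix(), clientv3.WithCountOnly())
	if took := time.Since(start); took > 100*time.Millisecond {
		log.Printf("range %q took %s, expected <100ms", prefix, took)
	}
	return err
}
```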
{"level":"info","ts":"2022-02-21T09:07:45.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:09:53.822Z","caller":"traceutil/trace.go:171","msg":"trace[491778915] linearizableReadLoop","detail":"{readStateIndex:559; appliedIndex:559; }","duration":"379.022828ms","start":"2022-02-21T09:09:53.443Z","end":"2022-02-21T09:09:53.822Z","steps":["trace[491778915] 'read index received' (duration: 379.013826ms)","trace[491778915] 'applied index is now lower than readState.Index' (duration: 7.979µs)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"379.166345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"310.062402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[1332575891] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:522; }","duration":"379.282463ms","start":"2022-02-21T09:09:53.443Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[1332575891] 'agreement among raft nodes before linearized reading' (duration: 379.128871ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[995360368] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:522; }","duration":"310.090407ms","start":"2022-02-21T09:09:53.513Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[995360368] 'agreement among raft nodes before linearized reading' (duration: 310.042415ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:09:53.443Z","time spent":"379.334172ms","remote":"127.0.0.1:40772","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":28,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true "} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"295.262502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-cx6k8\" ","response":"range_response_count:1 size:4636"} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:09:53.513Z","time spent":"310.195138ms","remote":"127.0.0.1:40808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":30,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "} 
{"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[970577559] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-cx6k8; range_end:; response_count:1; response_revision:522; }","duration":"295.3246ms","start":"2022-02-21T09:09:53.527Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[970577559] 'agreement among raft nodes before linearized reading' (duration: 295.211653ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"175.25773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[1487035562] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:522; }","duration":"175.40853ms","start":"2022-02-21T09:09:53.647Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[1487035562] 'agreement among raft nodes before linearized reading' (duration: 175.220445ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:54.207Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.896975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:54.207Z","caller":"traceutil/trace.go:171","msg":"trace[197536161] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:523; }","duration":"145.983587ms","start":"2022-02-21T09:09:54.061Z","end":"2022-02-21T09:09:54.207Z","steps":["trace[197536161] 'agreement among raft nodes before linearized reading' (duration: 89.066028ms)","trace[197536161] 'count revisions from in-memory index tree' (duration: 56.806429ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:17:45.538Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":617} {"level":"info","ts":"2022-02-21T09:17:45.539Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":617,"took":"727.44µs"} * * ==> kernel <== * 09:18:40 up 1:01, 0 users, load average: 0.80, 1.43, 2.17 Linux kubenet-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [2162c71d2bac] <== * I0221 09:07:47.811713 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 09:07:47.820829 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 09:07:47.823835 1 cache.go:39] Caches are synced for autoregister controller I0221 09:07:47.824219 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 09:07:47.824832 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 09:07:48.707923 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). 
I0221 09:07:48.715124 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 09:07:48.727648 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 09:07:48.732249 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 09:07:48.732268 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. I0221 09:07:49.169828 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 09:07:49.212301 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 09:07:49.319405 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 09:07:49.325071 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2] I0221 09:07:49.326070 1 controller.go:611] quota admission added evaluator for: endpoints I0221 09:07:49.329548 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 09:07:49.836734 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 09:07:50.931222 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 09:07:50.941473 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 09:07:50.957175 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 09:07:51.218176 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 09:08:03.652956 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 09:08:03.767118 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 09:08:04.943484 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:12:16.210623 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.108.177.0] * * ==> kube-controller-manager [dceb444a0ede] <== * I0221 09:08:03.001017 1 shared_informer.go:247] Caches are synced for expand I0221 09:08:03.002227 1 shared_informer.go:247] Caches are synced for ephemeral I0221 09:08:03.009218 1 shared_informer.go:247] Caches are synced for ReplicaSet I0221 09:08:03.012397 1 shared_informer.go:247] Caches are synced for node I0221 09:08:03.012453 1 range_allocator.go:173] Starting range CIDR allocator I0221 09:08:03.012460 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0221 09:08:03.012472 1 shared_informer.go:247] Caches are synced for cidrallocator I0221 09:08:03.017357 1 range_allocator.go:374] Set node kubenet-20220221084933-6550 PodCIDR to [10.244.0.0/24] I0221 09:08:03.050538 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0221 09:08:03.050540 1 shared_informer.go:247] Caches are synced for endpoint I0221 09:08:03.077232 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0221 09:08:03.101353 1 shared_informer.go:247] Caches are synced for crt configmap I0221 09:08:03.158146 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:08:03.204485 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:08:03.619721 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:08:03.655018 1 event.go:294] "Event occurred" object="kube-system/coredns" 
kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:08:03.661327 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:08:03.661351 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0221 09:08:03.750524 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:08:03.810865 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-npgzw" I0221 09:08:04.005746 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nt6xl" I0221 09:08:04.013863 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cx6k8" I0221 09:08:04.029545 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-nt6xl" I0221 09:12:16.225556 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:12:16.231770 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-4md9w" * * ==> kube-proxy [4349ba0d8abd] <== * I0221 09:08:04.903945 1 node.go:163] Successfully retrieved node IP: 192.168.76.2 I0221 09:08:04.904021 1 server_others.go:138] "Detected node IP" address="192.168.76.2" I0221 09:08:04.904060 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:08:04.931851 1 server_others.go:206] "Using iptables Proxier" I0221 09:08:04.932139 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:08:04.932159 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:08:04.932187 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:08:04.939605 1 server.go:656] "Version info" version="v1.23.4" I0221 09:08:04.940800 1 config.go:317] "Starting service config controller" I0221 09:08:04.940850 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:08:04.940940 1 config.go:226] "Starting endpoint slice config controller" I0221 09:08:04.940960 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:08:05.041845 1 shared_informer.go:247] Caches are synced for service config I0221 09:08:05.041861 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [390268e8d387] <== * W0221 09:07:47.813936 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster 
scope E0221 09:07:47.814643 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:07:47.813941 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 09:07:47.814672 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 09:07:47.814763 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 09:07:47.814703 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 09:07:48.639695 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 09:07:48.639756 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:07:48.646952 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0221 09:07:48.646990 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0221 09:07:48.736565 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 09:07:48.736597 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 09:07:48.746758 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:07:48.746785 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" 
cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 09:07:48.849593 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 09:07:48.849635 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 09:07:48.859685 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 09:07:48.859728 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 09:07:48.895897 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 09:07:48.895926 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 09:07:48.911402 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 09:07:48.911440 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 09:07:48.918562 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 09:07:48.918603 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope I0221 09:07:49.309140 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 09:07:35 UTC, end at Mon 2022-02-21 09:18:40 UTC. 
Feb 21 09:15:29 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:15:29.427872 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:15:44 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:15:44.427558 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:15:44 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:15:44.427772 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:15:58 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:15:58.427716 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:15:58 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:15:58.427951 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:11 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:11.427570 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:11 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:11.427867 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:26 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:26.427859 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:26 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:26.428166 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:37 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:37.427517 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:37 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:37.427743 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:51 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:51.428160 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:51 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:51.428359 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:17:04 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:04.427362 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:17:35 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:35.327086 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:17:35 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:35.327384 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:17:35 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:17:35.327634 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:17:50 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:50.428054 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:17:50 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:17:50.428256 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:18:04 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:18:04.427285 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:18:04 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:18:04.427560 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:18:16 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:18:16.427885 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:18:16 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:18:16.428125 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:18:31 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:18:31.428209 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:18:31 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:18:31.428424 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
*
* ==> storage-provisioner [0ff3389b3214] <==
*
I0221 09:17:04.545461 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0221 09:17:34.547448 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubenet-20220221084933-6550 -n kubenet-20220221084933-6550
helpers_test.go:262: (dbg) Run: kubectl --context kubenet-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/kubenet]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context kubenet-20220221084933-6550 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 describe pod : exit status 1 (40.9887ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context kubenet-20220221084933-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "kubenet-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p kubenet-20220221084933-6550
E0221 09:18:42.084259 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubenet-20220221084933-6550: (2.685573792s)
=== CONT TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220221091843-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p disable-driver-mounts-20220221091843-6550
=== CONT TestStartStop/group/default-k8s-different-port
=== RUN TestStartStop/group/default-k8s-different-port/serial
=== RUN TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run: out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4
E0221 09:18:47.801627 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:19:02.714240 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:19:05.983449 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0: (6m49.907378351s)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
E0221 09:19:08.282737 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
=== RUN TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-sn6bn" [d47e90cd-1050-43a8-8b22-7bc1f011c864] Running
E0221 09:19:09.771128 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014110081s
=== RUN TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-sn6bn" [d47e90cd-1050-43a8-8b22-7bc1f011c864] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005849472s
start_stop_delete_test.go:276: (dbg) Run: kubectl --context old-k8s-version-20220221090948-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== RUN TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run: out/minikube-linux-amd64 ssh -p old-k8s-version-20220221090948-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
=== RUN TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 pause -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550: exit status 2 (409.262476ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550: exit status 2 (415.061798ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 unpause -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
=== CONT TestStartStop/group/old-k8s-version/serial
start_stop_delete_test.go:147: (dbg) Run: out/minikube-linux-amd64 delete -p old-k8s-version-20220221090948-6550
start_stop_delete_test.go:147: (dbg) Done: out/minikube-linux-amd64 delete -p old-k8s-version-20220221090948-6550: (2.820256181s)
start_stop_delete_test.go:152: (dbg) Run: kubectl config get-contexts old-k8s-version-20220221090948-6550
start_stop_delete_test.go:152: (dbg) Non-zero exit: kubectl config get-contexts old-k8s-version-20220221090948-6550: exit status 1 (43.606082ms)
-- stdout --
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
-- /stdout --
** stderr **
error: context old-k8s-version-20220221090948-6550 not found
** /stderr **
start_stop_delete_test.go:154: config context error: exit status 1 (may be ok)
=== CONT TestStartStop/group/old-k8s-version
helpers_test.go:176: Cleaning up "old-k8s-version-20220221090948-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p old-k8s-version-20220221090948-6550
=== CONT TestStartStop/group/newest-cni
=== RUN TestStartStop/group/newest-cni/serial
=== RUN TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run: out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0
E0221 09:19:33.148755 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4: (4m54.155259731s)
=== RUN TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run: kubectl --context embed-certs-20220221091443-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [6145738e-6130-4b3d-a3fb-d7a1707425ef] Pending
helpers_test.go:343: "busybox" [6145738e-6130-4b3d-a3fb-d7a1707425ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [6145738e-6130-4b3d-a3fb-d7a1707425ef] Running
E0221 09:19:49.243615 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.01279123s
start_stop_delete_test.go:181: (dbg) Run: kubectl --context embed-certs-20220221091443-6550 exec busybox -- /bin/sh -c "ulimit -n"
=== RUN TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220221091443-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run: kubectl --context embed-certs-20220221091443-6550 describe deploy/metrics-server -n kube-system
=== RUN TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run: out/minikube-linux-amd64 stop -p embed-certs-20220221091443-6550 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220221091443-6550 --alsologtostderr -v=3: (12.182278691s)
=== RUN TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550: exit status 7 (96.086056ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220221091443-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
=== RUN TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run: out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4
E0221 09:20:10.800543 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (51.717775411s)
=== RUN TestStartStop/group/newest-cni/serial/DeployApp
=== RUN TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220221091925-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
=== RUN TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run: out/minikube-linux-amd64 stop -p newest-cni-20220221091925-6550 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220221091925-6550 --alsologtostderr -v=3: (10.955357852s)
=== RUN TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550: exit status 7 (99.293044ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220221091925-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
=== RUN TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run: out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (19.737292261s)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
=== RUN TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
=== RUN TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
=== RUN TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run: out/minikube-linux-amd64 ssh -p newest-cni-20220221091925-6550 "sudo crictl images -o json"
=== RUN TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 pause -p newest-cni-20220221091925-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550: exit status 2 (399.790605ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550: exit status 2 (413.864586ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 unpause -p newest-cni-20220221091925-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
=== CONT TestStartStop/group/newest-cni/serial
start_stop_delete_test.go:147: (dbg) Run: out/minikube-linux-amd64 delete -p newest-cni-20220221091925-6550
start_stop_delete_test.go:147: (dbg) Done: out/minikube-linux-amd64 delete -p newest-cni-20220221091925-6550: (2.687742753s)
start_stop_delete_test.go:152: (dbg) Run: kubectl config get-contexts newest-cni-20220221091925-6550
start_stop_delete_test.go:152: (dbg) Non-zero exit: kubectl config get-contexts newest-cni-20220221091925-6550: exit status 1 (34.299929ms)
-- stdout --
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
-- /stdout --
** stderr **
error: context newest-cni-20220221091925-6550 not found
** /stderr **
start_stop_delete_test.go:154: config context error: exit status 1 (may be ok)
=== CONT TestStartStop/group/newest-cni
helpers_test.go:176: Cleaning up "newest-cni-20220221091925-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p newest-cni-20220221091925-6550
--- FAIL: TestNetworkPlugins (1882.40s)
    --- FAIL: TestNetworkPlugins/group (0.24s)
        --- SKIP: TestNetworkPlugins/group/flannel (0.24s)
        --- PASS: TestNetworkPlugins/group/cilium (119.61s)
            --- PASS: TestNetworkPlugins/group/cilium/Start (97.40s)
            --- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
            --- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)
            --- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.91s)
            --- PASS: TestNetworkPlugins/group/cilium/DNS (0.18s)
            --- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)
            --- PASS: TestNetworkPlugins/group/cilium/HairPin (0.20s)
        --- FAIL: TestNetworkPlugins/group/false (433.35s)
            --- PASS: TestNetworkPlugins/group/false/Start (42.77s)
            --- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)
            --- PASS: TestNetworkPlugins/group/false/NetCatPod (11.21s)
            --- FAIL: TestNetworkPlugins/group/false/DNS (373.56s)
        --- FAIL: TestNetworkPlugins/group/custom-weave (524.66s)
            --- FAIL: TestNetworkPlugins/group/custom-weave/Start (519.15s)
        --- FAIL: TestNetworkPlugins/group/calico (559.37s)
            --- FAIL: TestNetworkPlugins/group/calico/Start (553.27s)
        --- FAIL: TestNetworkPlugins/group/auto (836.10s)
            --- PASS: TestNetworkPlugins/group/auto/Start (496.11s)
            --- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)
            --- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)
            --- FAIL: TestNetworkPlugins/group/auto/DNS (322.31s)
        --- FAIL: TestNetworkPlugins/group/kindnet (422.20s)
            --- PASS: TestNetworkPlugins/group/kindnet/Start (48.67s)
            --- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
            --- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)
            --- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)
            --- FAIL: TestNetworkPlugins/group/kindnet/DNS (352.09s)
        --- FAIL: TestNetworkPlugins/group/bridge (588.30s)
            --- PASS: TestNetworkPlugins/group/bridge/Start (290.53s)
            --- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)
            --- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)
            --- FAIL: TestNetworkPlugins/group/bridge/DNS (281.38s)
        --- FAIL: TestNetworkPlugins/group/enable-default-cni (671.75s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/Start (294.69s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)
            --- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (360.29s)
        --- FAIL: TestNetworkPlugins/group/kubenet (678.23s)
            --- PASS: TestNetworkPlugins/group/kubenet/Start (290.28s)
            --- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)
            --- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.30s)
            --- FAIL: TestNetworkPlugins/group/kubenet/DNS (370.31s)
E0221 09:21:11.163791 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:21:16.370120 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:21:46.065878 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:21:58.037581 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.042867 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.053122 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.073463 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.113730 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.194079 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.354490 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.675033 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:59.315792 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:00.596200 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:03.157230 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:08.278024 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:16.217751 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.223072 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.233301 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.253567 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.293876 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.374850 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.535255 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.855806 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:17.496352 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:18.518194 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:18.776582 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:21.337172 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:26.457346 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:30.568387 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:22:36.698397 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:38.998520 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:39.415283 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:22:57.179497 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:23:12.221760 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:23:19.959136 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:23:27.319752 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:23:29.174104 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:23:35.029378 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4: (4m51.781673931s)
=== RUN TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run: kubectl --context default-k8s-different-port-20220221091844-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c30f7999-494b-4682-a5ec-42c5c6cf1a20] Pending
helpers_test.go:343: "busybox" [c30f7999-494b-4682-a5ec-42c5c6cf1a20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0221 09:23:38.140664 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
helpers_test.go:343: "busybox" [c30f7999-494b-4682-a5ec-42c5c6cf1a20] Running
E0221 09:23:42.084580 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.010089357s
start_stop_delete_test.go:181: (dbg) Run: kubectl --context default-k8s-different-port-20220221091844-6550 exec busybox -- /bin/sh -c "ulimit -n"
=== RUN TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220221091844-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run: kubectl --context default-k8s-different-port-20220221091844-6550 describe deploy/metrics-server -n kube-system
=== RUN TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=3
E0221 09:23:55.004562 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=3: (10.72988837s)
=== RUN TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550: exit status 7 (96.393159ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220221091844-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
=== RUN TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run: out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4
E0221 09:24:05.984165 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (9m38.630433421s)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
E0221 09:24:33.149211 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
=== RUN TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vjq5t" [83e2ef81-6567-4673-87b4-cae081554b67] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011498501s
=== RUN TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vjq5t" [83e2ef81-6567-4673-87b4-cae081554b67] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0221 09:24:41.880136 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005640437s
start_stop_delete_test.go:276: (dbg) Run: kubectl --context no-preload-20220221091339-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== RUN TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run: out/minikube-linux-amd64 ssh -p no-preload-20220221091339-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
=== RUN TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 pause -p no-preload-20220221091339-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550: exit status 2 (393.86854ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550: exit status 2 (399.924261ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 unpause -p no-preload-20220221091339-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
=== CONT TestStartStop/group/no-preload/serial
start_stop_delete_test.go:147: (dbg) Run: out/minikube-linux-amd64 delete -p no-preload-20220221091339-6550
start_stop_delete_test.go:147: (dbg) Done: out/minikube-linux-amd64 delete -p no-preload-20220221091339-6550: (2.724306727s)
start_stop_delete_test.go:152: (dbg) Run: kubectl config get-contexts no-preload-20220221091339-6550
start_stop_delete_test.go:152: (dbg) Non-zero exit: kubectl config get-contexts no-preload-20220221091339-6550: exit status 1 (33.761064ms)
-- stdout --
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
-- /stdout --
** stderr **
error: context no-preload-20220221091339-6550 not found
** /stderr **
start_stop_delete_test.go:154: config context error: exit status 1 (may be ok)
=== CONT TestStartStop/group/no-preload
helpers_test.go:176: Cleaning up "no-preload-20220221091339-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p no-preload-20220221091339-6550
E0221 09:25:00.061534 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:25:10.800176 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:26:16.370598 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:26:46.066471 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:26:58.037765 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:27:09.029154 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 09:27:13.615043 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:27:16.218020 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:27:25.720874 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:27:30.568574 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:27:43.901971 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:28:09.113114 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:28:27.319564 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:28:29.174209 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:28:35.030393 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:28:42.084504 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:29:05.983525 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 09:29:16.194027 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 09:29:33.148745 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 09:29:34.146672 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.151901 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.162140 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.182364 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.222684 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.303042 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.463459 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:34.783988 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:35.424797 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
=== CONT TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4: (9m33.260485478s)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
E0221 09:29:36.705395 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
=== RUN TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-66fws" [37f40f16-716a-485a-aadc-e72bc75bcda5] Running
E0221 09:29:39.265847 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01169684s
=== RUN TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-66fws" [37f40f16-716a-485a-aadc-e72bc75bcda5] Running
E0221 09:29:44.387040 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-66fws" [37f40f16-716a-485a-aadc-e72bc75bcda5] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005766364s
start_stop_delete_test.go:276: (dbg) Run: kubectl --context embed-certs-20220221091443-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run: out/minikube-linux-amd64 ssh -p embed-certs-20220221091443-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 pause -p embed-certs-20220221091443-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550: exit status 2 (389.995101ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550: exit status 2 (389.462879ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 unpause -p embed-certs-20220221091443-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
=== CONT  TestStartStop/group/embed-certs/serial
start_stop_delete_test.go:147: (dbg) Run: out/minikube-linux-amd64 delete -p embed-certs-20220221091443-6550
start_stop_delete_test.go:147: (dbg) Done: out/minikube-linux-amd64 delete -p embed-certs-20220221091443-6550: (2.586794288s)
start_stop_delete_test.go:152: (dbg) Run: kubectl config get-contexts embed-certs-20220221091443-6550
start_stop_delete_test.go:152: (dbg) Non-zero exit: kubectl config get-contexts embed-certs-20220221091443-6550: exit status 1 (34.33754ms)
-- stdout --
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
-- /stdout --
** stderr **
error: context embed-certs-20220221091443-6550 not found
** /stderr **
start_stop_delete_test.go:154: config context error: exit status 1 (may be ok)
=== CONT  TestStartStop/group/embed-certs
helpers_test.go:176: Cleaning up "embed-certs-20220221091443-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p embed-certs-20220221091443-6550
E0221 09:29:54.627608 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:29:58.075104 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:30:05.131671 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:30:10.799816 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:30:15.108448 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:30:56.069077 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:31:16.370530 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:31:46.065620 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:31:58.037868 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:32:16.217975 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:32:17.990186 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
E0221 09:32:30.569697 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:33:13.844850 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:33:27.320221 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
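The recurring cert_rotation.go:168 entries above appear to come from client-go in the long-lived test process re-reading client certificates for profiles that earlier tests have already deleted. A minimal, illustrative Go sketch of that reload-and-log pattern; this is not client-go's actual code, and the paths and interval are hypothetical:

package main

import (
	"crypto/tls"
	"log"
	"time"
)

// watchKeyPair polls a client cert/key pair the way a rotation loop
// would. Once a minikube profile is deleted, the files are gone and
// every reload fails with "no such file or directory", producing the
// repeated error lines seen in this log.
func watchKeyPair(certFile, keyFile string, interval time.Duration) {
	for {
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			log.Printf("key failed with : %v", err) // mirrors the log format above
		}
		time.Sleep(interval)
	}
}

func main() {
	// Hypothetical profile paths, standing in for the .minikube/profiles entries above.
	watchKeyPair(
		"/home/jenkins/.minikube/profiles/example/client.crt",
		"/home/jenkins/.minikube/profiles/example/client.key",
		10*time.Second,
	)
}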
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.23.4: (9m30.845211501s)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-bpm6m" [fdee26cb-12aa-49b9-a8f0-78614c3d4bf4] Running
E0221 09:33:29.174003 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011518525s
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-bpm6m" [fdee26cb-12aa-49b9-a8f0-78614c3d4bf4] Running
E0221 09:33:35.030146 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-bpm6m" [fdee26cb-12aa-49b9-a8f0-78614c3d4bf4] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006100515s
start_stop_delete_test.go:276: (dbg) Run: kubectl --context default-k8s-different-port-20220221091844-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run: out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220221091844-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 pause -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550: exit status 2 (383.550482ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550: exit status 2 (384.278733ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
E0221 09:33:42.084310 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
=== CONT  TestStartStop/group/default-k8s-different-port/serial
start_stop_delete_test.go:147: (dbg) Run: out/minikube-linux-amd64 delete -p default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:147: (dbg) Done: out/minikube-linux-amd64 delete -p default-k8s-different-port-20220221091844-6550: (2.505042669s)
start_stop_delete_test.go:152: (dbg) Run: kubectl config get-contexts default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:152: (dbg) Non-zero exit: kubectl config get-contexts default-k8s-different-port-20220221091844-6550: exit status 1 (34.614631ms)
-- stdout --
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
-- /stdout --
** stderr **
error: context default-k8s-different-port-20220221091844-6550 not found
** /stderr **
start_stop_delete_test.go:154: config context error: exit status 1 (may be ok)
=== CONT  TestStartStop/group/default-k8s-different-port
helpers_test.go:176: Cleaning up "default-k8s-different-port-20220221091844-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p default-k8s-different-port-20220221091844-6550
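The Pause subtests above probe `minikube status` with a Go output template, and the harness tolerates exit status 2 ("status error: exit status 2 (may be ok)") because it only signals a non-running component such as Paused or Stopped. A minimal sketch of that probe, assuming only the exit-code behaviour visible in this log; the profile name is just the one used above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs the minikube status command with a Go template and treats
// exit code 2 as informational rather than as a failure.
func probe(format, profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 2 {
		// Exit code 2: a component is not Running (e.g. "Paused" or
		// "Stopped" after `minikube pause`), not a harness error.
		return state, nil
	}
	return state, err
}

func main() {
	apiserver, _ := probe("{{.APIServer}}", "default-k8s-different-port-20220221091844-6550")
	kubelet, _ := probe("{{.Kubelet}}", "default-k8s-different-port-20220221091844-6550")
	fmt.Printf("apiserver=%s kubelet=%s\n", apiserver, kubelet)
}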
--- PASS: TestStartStop (2497.15s)
--- PASS: TestStartStop/group (0.00s)
--- SKIP: TestStartStop/group/disable-driver-mounts (0.49s)
--- PASS: TestStartStop/group/old-k8s-version (576.91s)
--- PASS: TestStartStop/group/old-k8s-version/serial (576.42s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.32s)
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.97s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (410.35s)
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.43s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)
--- PASS: TestStartStop/group/newest-cni (90.71s)
--- PASS: TestStartStop/group/newest-cni/serial (90.25s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.72s)
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.96s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.15s)
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.12s)
--- PASS: TestStartStop/group/no-preload (670.90s)
--- PASS: TestStartStop/group/no-preload/serial (670.53s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.58s)
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.45s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.88s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (579.08s)
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
--- PASS: TestStartStop/group/embed-certs (909.76s)
--- PASS: TestStartStop/group/embed-certs/serial (909.46s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (294.16s)
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.34s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.64s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (573.67s)
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)
--- PASS: TestStartStop/group/default-k8s-different-port (901.32s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial (901.09s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (291.78s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.45s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.61s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.73s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (571.24s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.20s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.37s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.95s)
FAIL
Tests completed in 1h8m38.053061484s (result code 1)
=== Skipped
=== SKIP: . TestDownloadOnly/v1.16.0/cached-images (0.00s)
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
=== SKIP: . TestDownloadOnly/v1.16.0/binaries (0.00s)
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)
=== SKIP: . TestDownloadOnly/v1.16.0/kubectl (0.00s)
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)
=== SKIP: . TestDownloadOnly/v1.23.4/cached-images (0.00s)
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.4/cached-images (0.00s)
=== SKIP: . TestDownloadOnly/v1.23.4/binaries (0.00s)
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.4/binaries (0.00s)
=== SKIP: . TestDownloadOnly/v1.23.4/kubectl (0.00s)
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.4/kubectl (0.00s)
=== SKIP: . TestDownloadOnly/v1.23.5-rc.0/preload-exists (0.17s)
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.23.5-rc.0/preload-exists (0.17s)
=== SKIP: . TestDownloadOnly/v1.23.5-rc.0/kubectl (0.00s)
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5-rc.0/kubectl (0.00s)
=== SKIP: . TestAddons/parallel/Olm (0.00s)
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== SKIP: . TestHyperKitDriverInstallOrUpdate (0.00s)
driver_install_or_update_test.go:114: Skip if not darwin.
=== SKIP: . TestHyperkitDriverSkipUpgrade (0.00s)
driver_install_or_update_test.go:187: Skip if not darwin.
=== SKIP: . TestFunctional/parallel/PodmanEnv (0.00s)
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== SKIP: . TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
=== SKIP: . TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
=== SKIP: . TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
=== SKIP: . TestGvisorAddon (0.00s)
gvisor_addon_test.go:35: skipping test because --gvisor=false
=== SKIP: . TestChangeNoneUser (0.00s)
none_test.go:39: Only test none driver.
=== SKIP: . TestScheduledStopWindows (0.00s)
scheduled_stop_test.go:43: test only runs on windows
=== SKIP: . TestNetworkPlugins/group/flannel (0.24s)
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p flannel-20220221084933-6550
--- SKIP: TestNetworkPlugins/group/flannel (0.24s)
=== SKIP: . TestStartStop/group/disable-driver-mounts (0.49s)
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220221091843-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p disable-driver-mounts-20220221091843-6550
--- SKIP: TestStartStop/group/disable-driver-mounts (0.49s)
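The skip reasons above are simple platform and configuration gates. A hedged Go sketch of the shape of such a gate, illustrative only and not minikube's actual helper code:

package example

import (
	"runtime"
	"testing"
)

// TestDarwinOnlyFeature shows the gating pattern behind messages like
// "Skip if not darwin." and "test only runs on windows" in the list above.
func TestDarwinOnlyFeature(t *testing.T) {
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
	// darwin-only assertions would follow here
}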
=== Failed
=== FAIL: . TestDownloadOnly/v1.23.5-rc.0/cached-images (0.00s)
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/pause_3.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/pause_3.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.5-rc.0/cached-images (0.00s)
=== FAIL: . TestDownloadOnly/v1.23.5-rc.0 (17.71s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestDownloadOnly/v1.23.5-rc.0]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect download-only-20220221082507-6550
helpers_test.go:232: (dbg) Non-zero exit: docker inspect download-only-20220221082507-6550: exit status 1 (40.845631ms)
-- stdout --
[]
-- /stdout --
** stderr **
Error: No such object: download-only-20220221082507-6550
** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p download-only-20220221082507-6550 -n download-only-20220221082507-6550
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p download-only-20220221082507-6550 -n download-only-20220221082507-6550: exit status 7 (57.816622ms)
-- stdout --
Nonexistent
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "download-only-20220221082507-6550" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestDownloadOnly/v1.23.5-rc.0 (17.71s)
=== FAIL: . TestDownloadOnly (33.58s)
helpers_test.go:176: Cleaning up "download-only-20220221082507-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p download-only-20220221082507-6550
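The cached-images failure above boils down to stat-ing each expected image file under .minikube/cache/images. A minimal sketch of that check; the cache root here is a placeholder for the workspace path in the log, and the image list is a subset of the ten entries reported above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Placeholder cache root; the run above used the Jenkins workspace's .minikube.
	cacheDir := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "images")
	expected := []string{
		"k8s.gcr.io/kube-apiserver_v1.23.5-rc.0",
		"k8s.gcr.io/etcd_3.5.1-0",
		"k8s.gcr.io/coredns/coredns_v1.8.6",
		"gcr.io/k8s-minikube/storage-provisioner_v5",
	}
	for _, img := range expected {
		p := filepath.Join(cacheDir, img)
		if _, err := os.Stat(p); err != nil {
			// Mirrors the failure mode in the log: the stat itself errors.
			fmt.Printf("expected image file exist at %q but got error: %v\n", p, err)
		}
	}
}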
=== FAIL: . TestNetworkPlugins/group/false/DNS (373.56s)
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166804753s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:56:49.826677 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200105307s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141837539s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:57:30.568649 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135333876s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15077008s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16303599s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:58:29.174144 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136895197s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156602484s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 08:59:05.984234 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:59:33.149060 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.249345945s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 08:59:33.667848 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151779206s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:00:10.800062 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.805340 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.815646 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.835911 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.876175 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.956525 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:11.116743 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:11.437135 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:12.077939 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:13.358145 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:15.918473 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:21.038745 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:31.279147 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:00:51.760221 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157586177s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:01:32.721004 6550 cert_rotation.go:168] key failed with : open
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165653864s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* --- FAIL: TestNetworkPlugins/group/false/DNS (373.56s) === FAIL: . TestNetworkPlugins/group/false (433.35s) net_test.go:198: "false" test finished in 13m6.955811393s, failed=true net_test.go:199: *** TestNetworkPlugins/group/false FAILED at 2022-02-21 09:02:40.957031533 +0000 UTC m=+2253.719351118 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/false]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect false-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect false-20220221084934-6550: -- stdout -- [ { "Id": "15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf", "Created": "2022-02-21T08:55:40.800071805Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 241367, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:55:41.193088059Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/resolv.conf", "HostnamePath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/hostname", "HostsPath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/hosts", "LogPath": "/var/lib/docker/containers/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf/15fce63787daed6db999b1f309207ad80e337bb9e4d52f5345338a7851dc6bbf-json.log", "Name": "/false-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "false-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "false-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": 
null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/
diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148/merged", "UpperDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148/diff", "WorkDir": "/var/lib/docker/overlay2/8476a74746b06da4b8103d2b58bd9ac39378d43f651a88ef625032c36ce98148/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "false-20220221084934-6550", "Source": "/var/lib/docker/volumes/false-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "false-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "false-20220221084934-6550", "name.minikube.sigs.k8s.io": 
"false-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "c211cf589b40d4695a2757fea5bb7e84dcd2b6ac82849ffdcdccf4a415c7b962", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49374" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49373" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49370" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49372" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49371" } ] }, "SandboxKey": "/var/run/docker/netns/c211cf589b40", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "false-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.49.2" }, "Links": null, "Aliases": [ "15fce63787da", "false-20220221084934-6550" ], "NetworkID": "3aad4971443d81c436ad1afc5aaa14cfa5d6ed96df4c643898db907a8582d794", "EndpointID": "dfdc0aaf7bd14326a76ea9cab50ae553d8a473ab0d8abcd391cdde039b786634", "Gateway": "192.168.49.1", "IPAddress": "192.168.49.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:31:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p false-20220221084934-6550 -n false-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/false FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/false]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p false-20220221084934-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p false-20220221084934-6550 logs -n 25: (1.277167613s) helpers_test.go:253: TestNetworkPlugins/group/false logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | start | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:52:37 UTC | Mon, 21 Feb 2022 08:53:05 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | | --memory=2200 | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | | --alsologtostderr -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | stop | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:05 UTC | Mon, 21 Feb 2022 08:53:06 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | start | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:06 UTC | Mon, 21 Feb 2022 08:53:13 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:13 UTC | Mon, 21 Feb 2022 08:53:15 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | start | -p | 
kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:05 UTC | Mon, 21 Feb 2022 08:53:21 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | | --memory=2200 | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | | --alsologtostderr -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | start | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:52:46 UTC | Mon, 21 Feb 2022 08:53:25 UTC | | | --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:25 UTC | Mon, 21 Feb 2022 08:53:26 UTC | | | --alsologtostderr -v=5 | | | | | | | unpause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:27 UTC | Mon, 21 Feb 2022 08:53:28 UTC | | | --alsologtostderr -v=5 | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 
08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 08:55:33 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 08:55:33.077855 239635 out.go:297] Setting OutFile to fd 1 ... I0221 08:55:33.078244 239635 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:55:33.078260 239635 out.go:310] Setting ErrFile to fd 2... I0221 08:55:33.078267 239635 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:55:33.078547 239635 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 08:55:33.079122 239635 out.go:304] Setting JSON to false I0221 08:55:33.104574 239635 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2287,"bootTime":1645431446,"procs":1006,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:55:33.104709 239635 start.go:122] virtualization: kvm guest I0221 08:55:33.107749 239635 out.go:176] * [false-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 08:55:33.109511 239635 out.go:176] - MINIKUBE_LOCATION=13641 I0221 08:55:33.108048 239635 notify.go:193] Checking for updates... 
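A note on reading the trace that follows: the `Log file created at:` banner documents minikube's klog-style header, `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. When triaging interleaved post-mortem logs like this one, it helps to split each line into those fields so entries can be grouped by thread id or source file. A minimal Go sketch (the regex and field names are illustrative tooling, not part of minikube itself):

```go
package main

import (
	"fmt"
	"regexp"
)

// klog header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var logLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I0221 08:55:33.077855  239635 out.go:297] Setting OutFile to fd 1 ..."
	if m := logLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}
```

Grouping by the pid field is what untangles the streams below, where several profiles started by parallel tests log into the same capture.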
I0221 08:55:33.111043 239635 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 08:55:33.112576 239635 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:55:33.114627 239635 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 08:55:33.116118 239635 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 08:55:33.116659 239635 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:33.116787 239635 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:33.116906 239635 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:33.116975 239635 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:55:33.167303 239635 docker.go:132] docker version: linux-20.10.12 I0221 08:55:33.167394 239635 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:55:33.276263 239635 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:55:33.197540287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:55:33.276357 239635 docker.go:237] overlay module found I0221 08:55:33.279678 239635 out.go:176] * Using the docker driver based on user configuration I0221 08:55:33.279708 239635 start.go:281] selected driver: docker I0221 08:55:33.279713 239635 start.go:798] validating driver "docker" against I0221 08:55:33.279735 239635 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 08:55:33.279796 239635 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 08:55:33.279816 239635 out.go:241] ! Your cgroup does not allow setting memory. I0221 08:55:33.281318 239635 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 08:55:33.281928 239635 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:55:33.384711 239635 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:55:33.318375786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:55:33.384840 239635 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 08:55:33.384981 239635 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 08:55:33.385004 239635 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 08:55:33.385016 239635 cni.go:93] Creating CNI manager for "false" I0221 08:55:33.385025 239635 start_flags.go:302] config: {Name:false-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:55:33.387333 239635 out.go:176] * Starting control plane node false-20220221084934-6550 in cluster false-20220221084934-6550 I0221 08:55:33.387377 239635 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:55:33.388617 239635 out.go:176] * Pulling base image ... 
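The `config:` dump above is the cluster config that gets persisted to `.minikube/profiles/false-20220221084934-6550/config.json` a few steps later. For post-mortem tooling it is usually enough to decode a trimmed view of that file; a hedged sketch (the field names mirror the logged struct, but this is not minikube's full schema):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Trimmed view of the profile config saved under
// .minikube/profiles/<name>/config.json. Illustrative subset only.
type profileConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. .../profiles/<name>/config.json
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: driver=%s k8s=%s runtime=%s\n",
		cfg.Name, cfg.Driver,
		cfg.KubernetesConfig.KubernetesVersion,
		cfg.KubernetesConfig.ContainerRuntime)
}
```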
I0221 08:55:33.388646 239635 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:55:33.388678 239635 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 08:55:33.388691 239635 cache.go:57] Caching tarball of preloaded images I0221 08:55:33.388734 239635 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:55:33.388928 239635 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 08:55:33.388943 239635 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 08:55:33.389067 239635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/config.json ... I0221 08:55:33.389095 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/config.json: {Name:mk5e5f0594e41817331267f3d5f1d321ef035e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:33.445771 239635 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:55:33.445812 239635 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 08:55:33.445833 239635 cache.go:208] Successfully downloaded all kic artifacts I0221 08:55:33.445873 239635 start.go:313] acquiring machines lock for false-20220221084934-6550: {Name:mk2f605a05695ae89fd93473685b8b7565d11497 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:55:33.446033 239635 start.go:317] acquired machines lock for "false-20220221084934-6550" in 132.793µs I0221 08:55:33.446064 239635 start.go:89] Provisioning new machine with config: &{Name:false-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker 
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:55:33.446171 239635 start.go:126] createHost starting for "" (driver="docker") I0221 08:55:31.725027 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:32.608422 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:34.608830 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:33.593883 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:35.603309 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:33.448704 239635 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 08:55:33.448996 239635 start.go:160] libmachine.API.Create for "false-20220221084934-6550" (driver="docker") I0221 08:55:33.449030 239635 client.go:168] LocalClient.Create starting I0221 08:55:33.449145 239635 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 08:55:33.449186 239635 main.go:130] libmachine: Decoding PEM data... I0221 08:55:33.449221 239635 main.go:130] libmachine: Parsing certificate... I0221 08:55:33.449284 239635 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 08:55:33.449304 239635 main.go:130] libmachine: Decoding PEM data... I0221 08:55:33.449317 239635 main.go:130] libmachine: Parsing certificate... 
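The `Reading certificate data / Decoding PEM data / Parsing certificate` triplet above is libmachine loading `ca.pem` and `cert.pem` from the `.minikube/certs` directory. The equivalent stdlib-only steps in Go, handy for inspecting those files by hand (a sketch; libmachine's own code path differs in detail):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile(os.Args[1]) // e.g. .../.minikube/certs/ca.pem
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// "Decoding PEM data..."
	block, _ := pem.Decode(raw)
	if block == nil || block.Type != "CERTIFICATE" {
		fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
		os.Exit(1)
	}
	// "Parsing certificate..."
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}
```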
I0221 08:55:33.449799 239635 cli_runner.go:133] Run: docker network inspect false-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:55:33.484614 239635 cli_runner.go:180] docker network inspect false-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:55:33.484710 239635 network_create.go:254] running [docker network inspect false-20220221084934-6550] to gather additional debugging logs... I0221 08:55:33.484745 239635 cli_runner.go:133] Run: docker network inspect false-20220221084934-6550 W0221 08:55:33.525919 239635 cli_runner.go:180] docker network inspect false-20220221084934-6550 returned with exit code 1 I0221 08:55:33.525955 239635 network_create.go:257] error running [docker network inspect false-20220221084934-6550]: docker network inspect false-20220221084934-6550: exit status 1 stdout: [] stderr: Error: No such network: false-20220221084934-6550 I0221 08:55:33.525971 239635 network_create.go:259] output of [docker network inspect false-20220221084934-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: false-20220221084934-6550 ** /stderr ** I0221 08:55:33.526037 239635 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:55:33.567653 239635 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010890] misses:0} I0221 08:55:33.567721 239635 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 08:55:33.567755 239635 network_create.go:106] attempt to create docker network false-20220221084934-6550 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
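Because `docker network inspect false-20220221084934-6550` fails with `No such network`, the start path falls through to choosing a free private subnet, logging `reserving subnet 192.168.49.0 for 1m0s` and `using free private subnet 192.168.49.0/24`. A rough sketch of that probe (minikube's real implementation also time-boxes the reservation to avoid races between the parallel starts visible in this run; the candidate ranges and step size here are assumptions):

```go
package main

import (
	"fmt"
	"net"
)

// Walk candidate /24s starting at 192.168.49.0 and take the first one
// that no local interface address falls inside.
func firstFreeSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 49; third < 255; third += 10 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		inUse := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				inUse = true
				break
			}
		}
		if !inUse {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free subnet found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("free subnet:", subnet)
}
```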
I0221 08:55:33.567812 239635 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220221084934-6550 I0221 08:55:33.653745 239635 network_create.go:90] docker network false-20220221084934-6550 192.168.49.0/24 created I0221 08:55:33.653798 239635 kic.go:106] calculated static IP "192.168.49.2" for the "false-20220221084934-6550" container I0221 08:55:33.653860 239635 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 08:55:33.689779 239635 cli_runner.go:133] Run: docker volume create false-20220221084934-6550 --label name.minikube.sigs.k8s.io=false-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true I0221 08:55:33.735189 239635 oci.go:102] Successfully created a docker volume false-20220221084934-6550 I0221 08:55:33.735273 239635 cli_runner.go:133] Run: docker run --rm --name false-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220221084934-6550 --entrypoint /usr/bin/test -v false-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 08:55:34.516202 239635 oci.go:106] Successfully prepared a docker volume false-20220221084934-6550 I0221 08:55:34.516254 239635 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:55:34.516274 239635 kic.go:179] Starting extracting preloaded images to volume ... I0221 08:55:34.516349 239635 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 08:55:34.770308 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:37.819456 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:37.082853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:39.082914 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:41.084278 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:38.094303 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:40.594209 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:40.648604 239635 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220221084934-6550:/extractDir 
gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.132215188s) I0221 08:55:40.648640 239635 kic.go:188] duration metric: took 6.132363 seconds to extract preloaded images to volume W0221 08:55:40.648677 239635 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 08:55:40.648691 239635 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0221 08:55:40.648745 239635 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 08:55:40.764575 239635 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220221084934-6550 --name false-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220221084934-6550 --network false-20220221084934-6550 --ip 192.168.49.2 --volume false-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 08:55:41.202324 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Running}} I0221 08:55:41.240762 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:55:41.276327 239635 cli_runner.go:133] Run: docker exec false-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables I0221 08:55:41.344357 239635 oci.go:281] the created container "false-20220221084934-6550" has a running status. I0221 08:55:41.344393 239635 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa... I0221 08:55:41.689215 239635 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 08:55:41.804415 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:55:41.852428 239635 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 08:55:41.852455 239635 kic_runner.go:114] Args: [docker exec --privileged false-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 08:55:41.945935 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:55:41.987788 239635 machine.go:88] provisioning docker machine ... 
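The `docker run` above publishes every guest port to a random loopback port (`--publish=127.0.0.1::22`, and so on), and the provisioning steps that follow recover the chosen port with `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`. The same lookup from Go via os/exec, wrapping the exact command the log shows (a sketch):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks the docker CLI which host port a published container
// port was bound to, using the same Go template as the log above.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(
		`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("false-20220221084934-6550", "22")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", p) // this run bound 22/tcp to 49374
}
```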
I0221 08:55:41.987822 239635 ubuntu.go:169] provisioning hostname "false-20220221084934-6550" I0221 08:55:41.987877 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.028280 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:42.028524 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:42.028549 239635 main.go:130] libmachine: About to run SSH command: sudo hostname false-20220221084934-6550 && echo "false-20220221084934-6550" | sudo tee /etc/hostname I0221 08:55:42.166605 239635 main.go:130] libmachine: SSH cmd err, output: : false-20220221084934-6550 I0221 08:55:42.166723 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.200507 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:42.200766 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:42.200798 239635 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sfalse-20220221084934-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220221084934-6550/g' /etc/hosts; else echo '127.0.1.1 false-20220221084934-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:55:42.331358 239635 main.go:130] libmachine: SSH cmd err, output: : I0221 08:55:42.331392 239635 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:55:42.331422 239635 ubuntu.go:177] setting up certificates I0221 08:55:42.331432 239635 provision.go:83] configureAuth start I0221 08:55:42.331488 239635 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220221084934-6550 I0221 08:55:42.370135 239635 provision.go:138] copyHostCerts I0221 08:55:42.370196 239635 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... 
I0221 08:55:42.370203 239635 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:55:42.370259 239635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:55:42.370365 239635 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 08:55:42.370382 239635 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:55:42.370415 239635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:55:42.370470 239635 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 08:55:42.370481 239635 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:55:42.370500 239635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:55:42.370567 239635 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.false-20220221084934-6550 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220221084934-6550] I0221 08:55:42.479623 239635 provision.go:172] copyRemoteCerts I0221 08:55:42.479692 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:55:42.479733 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.515815 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:42.602649 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:55:42.621359 239635 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes) I0221 08:55:42.640061 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 08:55:42.658008 239635 provision.go:86] duration metric: configureAuth took 326.566164ms I0221 08:55:42.658036 239635 ubuntu.go:193] setting minikube options for container-runtime I0221 08:55:42.658187 239635 config.go:176] Loaded profile config "false-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:42.658226 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.693609 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:42.693748 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:42.693763 239635 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 08:55:42.827656 239635 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 08:55:42.827684 239635 ubuntu.go:71] root file system type: overlay I0221 08:55:42.827880 239635 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 08:55:42.827961 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:42.869428 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:42.869587 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:42.869645 239635 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 08:55:43.004737 239635 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 08:55:43.004838 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:40.855138 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:43.899128 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:43.042605 239635 main.go:130] libmachine: Using SSH client type: native I0221 08:55:43.042768 239635 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49374 } I0221 08:55:43.042815 239635 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 08:55:43.849500 239635 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 08:55:43.002186410 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 08:55:43.849532 239635 machine.go:91] provisioned docker machine in 1.86172323s I0221 08:55:43.849543 239635 client.go:171] LocalClient.Create took 10.400507664s I0221 08:55:43.849553 239635 start.go:168] duration metric: libmachine.API.Create for "false-20220221084934-6550" took 10.400558541s I0221 08:55:43.849560 239635 start.go:267] post-start starting for "false-20220221084934-6550" (driver="docker") I0221 08:55:43.849565 239635 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 08:55:43.849623 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 08:55:43.849660 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:43.883791 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:43.975048 239635 ssh_runner.go:195] Run: cat /etc/os-release I0221 08:55:43.977983 239635 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 08:55:43.978007 239635 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 08:55:43.978016 239635 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 08:55:43.978020 239635 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 08:55:43.978030 239635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
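The `sudo diff -u ... || { sudo mv ...; sudo systemctl -f daemon-reload && ... restart docker; }` sequence above is an idempotent-update pattern: the rendered docker.service drop-in only replaces the installed unit, and only triggers a daemon-reload and restart, when the two files actually differ. The core of that pattern expressed in Go (a sketch; the path and unit contents are placeholders):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged installs newContent at path only when it differs from
// what is already there, so callers can skip a disruptive service restart.
func writeIfChanged(path string, newContent []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: nothing to reload or restart
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n...\n"))
	if err != nil {
		fmt.Println(err)
		return
	}
	// If changed, the caller would daemon-reload and restart the service,
	// mirroring the shell fallback branch in the log above.
	fmt.Println("changed:", changed)
}
```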
I0221 08:55:43.978083 239635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:55:43.978146 239635 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:55:43.978220 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:55:43.985681 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:55:44.006873 239635 start.go:270] post-start completed in 157.29813ms I0221 08:55:44.007350 239635 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220221084934-6550 I0221 08:55:44.048382 239635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/config.json ... I0221 08:55:44.048631 239635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:55:44.048678 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:44.082129 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:44.171620 239635 start.go:129] duration metric: createHost completed in 10.725436331s I0221 08:55:44.171655 239635 start.go:80] releasing machines lock for "false-20220221084934-6550", held for 10.725604036s I0221 08:55:44.171749 239635 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220221084934-6550 I0221 08:55:44.208115 239635 ssh_runner.go:195] Run: systemctl --version I0221 08:55:44.208167 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:44.208180 239635 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:55:44.208237 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:55:44.251474 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:44.257426 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:55:44.486291 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 08:55:44.496111 239635 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:55:44.506776 239635 cruntime.go:272] skipping 
containerd shutdown because we are bound to it I0221 08:55:44.506843 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 08:55:44.520408 239635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 08:55:44.537069 239635 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 08:55:44.627356 239635 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 08:55:44.721100 239635 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:55:44.732712 239635 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 08:55:44.818414 239635 ssh_runner.go:195] Run: sudo systemctl start docker I0221 08:55:44.830044 239635 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:55:44.877999 239635 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:55:44.924321 239635 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 08:55:44.924423 239635 cli_runner.go:133] Run: docker network inspect false-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:55:44.958433 239635 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0221 08:55:44.961770 239635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:55:44.973175 239635 out.go:176] - kubelet.housekeeping-interval=5m I0221 08:55:44.973239 239635 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:55:44.973284 239635 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:55:45.006776 239635 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:55:45.006797 239635 docker.go:537] Images already preloaded, skipping extraction I0221 08:55:45.006840 239635 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:55:45.041682 239635 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:55:45.041706 239635 cache_images.go:84] Images are preloaded, skipping loading I0221 08:55:45.041748 239635 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 08:55:45.126963 239635 cni.go:93] Creating CNI manager for "false" I0221 08:55:45.127028 239635 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 
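`Images already preloaded, skipping extraction` above is the outcome of comparing `docker images --format {{.Repository}}:{{.Tag}}` inside the node against the image list expected from the preload tarball. A rough re-creation of that check (the expected list below is abbreviated from the log, and this is not minikube's exact logic):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPresent reports whether every wanted repo:tag already exists
// in the local docker image store.
func imagesPresent(want []string) (bool, error) {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range want {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.4",
		"k8s.gcr.io/pause:3.6",
	}
	ok, err := imagesPresent(want)
	fmt.Println(ok, err)
}
```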
I0221 08:55:45.127049 239635 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220221084934-6550 NodeName:false-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 08:55:45.127209 239635 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "false-20220221084934-6550" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 08:55:45.127323 239635 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=false-20220221084934-6550 --housekeeping-interval=5m 
--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} I0221 08:55:45.127402 239635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 08:55:45.134691 239635 binaries.go:44] Found k8s binaries, skipping transfer I0221 08:55:45.134765 239635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 08:55:45.142486 239635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes) I0221 08:55:45.155729 239635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 08:55:45.169105 239635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes) I0221 08:55:45.182350 239635 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0221 08:55:45.185390 239635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:55:45.195280 239635 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550 for IP: 192.168.49.2 I0221 08:55:45.195372 239635 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 08:55:45.195409 239635 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 08:55:45.195467 239635 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.key I0221 08:55:45.195482 239635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt with IP's: [] I0221 08:55:45.464966 239635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt ... I0221 08:55:45.465001 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: {Name:mkdc6c86a484bb695bb258b5feb6185d1eb29a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.465219 239635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.key ... 
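The kubeadm config dumped above is rendered from the option struct logged at kubeadm.go:158. A sketch of that rendering with text/template is below, reduced to the ClusterConfiguration fields visible in the log; the clusterOpts type and the template string are assumptions for illustration, not minikube's real types.

// kubeadmcfg.go: render a ClusterConfiguration like the one logged above.
package main

import (
	"os"
	"text/template"
)

type clusterOpts struct {
	KubernetesVersion string
	ClusterName       string
	ControlPlane      string
	PodSubnet         string
	ServiceSubnet     string
	DNSDomain         string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlane}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	// Values taken from the kubeadm options line above.
	err := t.Execute(os.Stdout, clusterOpts{
		KubernetesVersion: "v1.23.4",
		ClusterName:       "mk",
		ControlPlane:      "control-plane.minikube.internal:8443",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		DNSDomain:         "cluster.local",
	})
	if err != nil {
		panic(err)
	}
}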
I0221 08:55:45.465236 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.key: {Name:mk5469692e52021cfcc273116a99298fde294eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.465326 239635 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2 I0221 08:55:45.465343 239635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 08:55:45.532099 239635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2 ... I0221 08:55:45.532131 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2: {Name:mk4f9bddf2d8f47495bb7872a93a24c91d949bce Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.532299 239635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2 ... I0221 08:55:45.532312 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2: {Name:mk1d78c9de9f8825620a84ece39b31abc4ec2d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.532389 239635 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt I0221 08:55:45.532442 239635 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key I0221 08:55:45.532496 239635 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key I0221 08:55:45.532509 239635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt with IP's: [] I0221 08:55:45.611100 239635 crypto.go:156] Writing cert to 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt ... I0221 08:55:45.611129 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt: {Name:mkbcc271b992841575783c9f82cdc99f41db88f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.611296 239635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key ... I0221 08:55:45.611310 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key: {Name:mkfdc42be1b4c25c7176716ce7c9f5ed6c0ed3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:45.611463 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 08:55:45.611499 239635 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 08:55:45.611506 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 08:55:45.611532 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 08:55:45.611557 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 08:55:45.611581 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 08:55:45.611628 239635 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 08:55:45.612674 239635 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 08:55:45.631353 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0221 08:55:45.648917 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 08:55:45.666591 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0221 08:55:45.684159 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 08:55:45.701818 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 08:55:45.719473 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 08:55:45.737014 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 08:55:45.754513 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 08:55:45.772036 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 08:55:45.789829 239635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 08:55:45.807112 239635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 08:55:45.819894 239635 ssh_runner.go:195] Run: openssl version I0221 08:55:45.824776 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 08:55:45.832744 239635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 08:55:45.835837 239635 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 08:55:45.835890 239635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 08:55:45.840887 239635 ssh_runner.go:195] Run: 
sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 08:55:45.848695 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 08:55:45.856224 239635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 08:55:45.859423 239635 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 08:55:45.859475 239635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 08:55:45.864292 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 08:55:45.871752 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 08:55:45.878978 239635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 08:55:45.881994 239635 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 08:55:45.882039 239635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 08:55:45.886887 239635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 08:55:45.894360 239635 kubeadm.go:391] StartCluster: {Name:false-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:false-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:55:45.894481 239635 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 08:55:45.926642 239635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 08:55:45.933993 239635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 08:55:45.941434 239635 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 08:55:45.941488 239635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 08:55:45.949175 239635 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 08:55:45.949230 239635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 08:55:43.583801 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:46.104227 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:43.094975 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:45.594422 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:46.578129 239635 out.go:203] - Generating certificates and keys ... I0221 08:55:46.949101 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:48.608316 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:51.082452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:48.094138 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:50.094339 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:49.236357 239635 out.go:203] - Booting up control plane ... 
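The crypto.go lines above generate the apiserver certificate with the IP SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]: the node IP, the first address of the 10.96.0.0/12 service CIDR, loopback, and 10.0.0.1. A self-contained Go sketch of issuing such a certificate with crypto/x509 follows; it self-signs for brevity, whereas the flow in the log signs with the minikube CA key.

// ipsancert.go: issue a certificate carrying the IP SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), // node IP
			net.ParseIP("10.96.0.1"),    // first IP of the service CIDR
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	// Self-signed: the template doubles as its own parent here.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}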
I0221 08:55:49.985044 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:53.019690 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:53.082812 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:55.604982 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:52.593954 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:55.094041 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:57.094158 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:57.284223 239635 out.go:203] - Configuring RBAC rules ... I0221 08:55:57.697593 239635 cni.go:93] Creating CNI manager for "false" I0221 08:55:57.697659 239635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 08:55:57.697727 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:57.697758 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=false-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:57.827880 239635 ops.go:34] apiserver oom_adj: -16 I0221 08:55:57.827972 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:56.060744 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:59.104251 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:55:58.083480 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:00.107900 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:55:59.594464 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:01.594499 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:58.858347 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:59.358534 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:59.858264 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:00.358933 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:00.857953 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:01.358649 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:01.858853 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:02.358809 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:02.858068 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:02.144689 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:02.108600 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:04.109005 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:06.608183 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:03.595044 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:06.096228 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:03.358732 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:03.858149 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:04.358501 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:04.858946 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:05.358259 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:05.858632 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:06.358646 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:06.858739 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:07.358889 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:07.858256 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:05.187189 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:08.234700 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:08.358051 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:08.858255 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:09.358060 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:09.858891 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:10.358357 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:10.858913 239635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:56:10.944981 239635 kubeadm.go:1020] duration metric: took 13.247304119s to wait for elevateKubeSystemPrivileges. I0221 08:56:10.945020 239635 kubeadm.go:393] StartCluster complete in 25.050665602s I0221 08:56:10.945040 239635 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:56:10.945157 239635 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:56:10.947338 239635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:56:11.510362 239635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20220221084934-6550" rescaled to 1 I0221 08:56:11.510417 239635 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:56:11.512591 239635 out.go:176] * Verifying Kubernetes components... I0221 08:56:11.510487 239635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 08:56:11.510508 239635 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 08:56:11.512794 239635 addons.go:65] Setting storage-provisioner=true in profile "false-20220221084934-6550" I0221 08:56:11.512817 239635 addons.go:153] Setting addon storage-provisioner=true in "false-20220221084934-6550" W0221 08:56:11.512823 239635 addons.go:165] addon storage-provisioner should already be in state true I0221 08:56:11.510662 239635 config.go:176] Loaded profile config "false-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:56:11.512851 239635 host.go:66] Checking if "false-20220221084934-6550" exists ... I0221 08:56:11.512662 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:56:11.512894 239635 addons.go:65] Setting default-storageclass=true in profile "false-20220221084934-6550" I0221 08:56:11.512913 239635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20220221084934-6550" I0221 08:56:11.513247 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:56:11.513326 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:56:11.545075 239635 node_ready.go:35] waiting up to 5m0s for node "false-20220221084934-6550" to be "Ready" ... 
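The half-second loop of "kubectl get sa default" above is how the bootstrapper decides the control plane is usable: once the token controller has created the default service account, workloads can be admitted, and the log credits the whole wait to elevateKubeSystemPrivileges (13.2s here). A sketch of that retry loop, with the binary and kubeconfig paths copied from the log for illustration:

// sawait.go: poll for the default service account, as in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// The log prefixes this with sudo; omitted here for simplicity.
		if exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log retries on this cadence
	}
	return fmt.Errorf("default service account not created within %v", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.23.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute))
}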
I0221 08:56:11.551425 239635 node_ready.go:49] node "false-20220221084934-6550" has status "Ready":"True" I0221 08:56:11.551454 239635 node_ready.go:38] duration metric: took 6.346806ms waiting for node "false-20220221084934-6550" to be "Ready" ... I0221 08:56:11.551465 239635 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:56:09.083257 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:11.584369 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:08.594008 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:10.594274 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:11.571730 239635 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 08:56:11.571897 239635 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:56:11.571919 239635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 08:56:11.571975 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:56:11.575024 239635 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-9k8b6" in "kube-system" namespace to be "Ready" ... I0221 08:56:11.605352 239635 addons.go:153] Setting addon default-storageclass=true in "false-20220221084934-6550" W0221 08:56:11.605400 239635 addons.go:165] addon default-storageclass should already be in state true I0221 08:56:11.605431 239635 host.go:66] Checking if "false-20220221084934-6550" exists ... I0221 08:56:11.605905 239635 cli_runner.go:133] Run: docker container inspect false-20220221084934-6550 --format={{.State.Status}} I0221 08:56:11.649737 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:56:11.670390 239635 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 08:56:11.670419 239635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 08:56:11.670473 239635 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220221084934-6550 I0221 08:56:11.714040 239635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49374 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/false-20220221084934-6550/id_rsa Username:docker} I0221 08:56:11.731413 239635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 08:56:11.935089 239635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:56:11.937307 239635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 08:56:13.219947 239635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.48849471s) I0221 08:56:13.219982 239635 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0221 08:56:13.307245 239635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.372085112s) I0221 08:56:13.307328 239635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.369989228s) I0221 08:56:11.277437 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:13.309383 239635 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 08:56:13.309406 239635 addons.go:417] enableAddons completed in 1.798918883s I0221 08:56:13.620775 239635 pod_ready.go:102] pod "coredns-64897985d-9k8b6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:14.119861 239635 pod_ready.go:92] pod "coredns-64897985d-9k8b6" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.119953 239635 pod_ready.go:81] duration metric: took 2.544859967s waiting for pod "coredns-64897985d-9k8b6" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.119988 239635 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-snkv2" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.125349 239635 pod_ready.go:92] pod "coredns-64897985d-snkv2" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.125379 239635 pod_ready.go:81] duration metric: took 5.372629ms waiting for pod "coredns-64897985d-snkv2" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.125392 239635 pod_ready.go:78] waiting up to 5m0s for pod "etcd-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.130521 239635 pod_ready.go:92] pod "etcd-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.130548 239635 pod_ready.go:81] duration metric: took 5.148117ms waiting for pod "etcd-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.130573 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
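The sed pipeline above splices a hosts block into the CoreDNS Corefile just before its forward plugin, so host.minikube.internal resolves to the gateway while every other name falls through to upstream DNS. The Go sketch below applies the same edit to a Corefile string directly; the sample Corefile is a typical default, not the one from this cluster.

// corednshosts.go: insert the hosts block ahead of the forward plugin.
package main

import (
	"fmt"
	"strings"
)

const hostsBlock = `    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
`

func injectHosts(corefile string) string {
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// hosts must precede forward so the record is consulted first.
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	fmt.Print(injectHosts(`.:53 {
    errors
    forward . /etc/resolv.conf
    cache 30
}
`))
}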
I0221 08:56:14.136032 239635 pod_ready.go:92] pod "kube-apiserver-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.136057 239635 pod_ready.go:81] duration metric: took 5.474386ms waiting for pod "kube-apiserver-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.136070 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.141889 239635 pod_ready.go:92] pod "kube-controller-manager-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.141912 239635 pod_ready.go:81] duration metric: took 5.834181ms waiting for pod "kube-controller-manager-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.141931 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-mlfhq" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.517663 239635 pod_ready.go:92] pod "kube-proxy-mlfhq" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.517685 239635 pod_ready.go:81] duration metric: took 375.747262ms waiting for pod "kube-proxy-mlfhq" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.517694 239635 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.917218 239635 pod_ready.go:92] pod "kube-scheduler-false-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:56:14.917240 239635 pod_ready.go:81] duration metric: took 399.540555ms waiting for pod "kube-scheduler-false-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:56:14.917249 239635 pod_ready.go:38] duration metric: took 3.365771221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:56:14.917273 239635 api_server.go:51] waiting for apiserver process to appear ... I0221 08:56:14.917314 239635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 08:56:14.945692 239635 api_server.go:71] duration metric: took 3.435246847s to wait for apiserver process to appear ... I0221 08:56:14.945777 239635 api_server.go:87] waiting for apiserver healthz status ... I0221 08:56:14.945798 239635 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 08:56:14.951742 239635 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 08:56:14.952909 239635 api_server.go:140] control plane version: v1.23.4 I0221 08:56:14.952937 239635 api_server.go:130] duration metric: took 7.147051ms to wait for apiserver health ... I0221 08:56:14.952948 239635 system_pods.go:43] waiting for kube-system pods to appear ... 
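Each pod_ready wait above reduces to one question: is the pod's PodReady condition True? A client-go sketch of that check follows, assuming client-go is available as a module dependency; the pod name and kubeconfig path are taken from the log for illustration only.

// podready.go: poll a pod's Ready condition, as pod_ready.go does above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(5 * time.Minute) // the log waits up to 5m0s
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-false-20220221084934-6550", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}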
I0221 08:56:15.120310 239635 system_pods.go:59] 8 kube-system pods found I0221 08:56:15.120352 239635 system_pods.go:61] "coredns-64897985d-9k8b6" [7231ddf1-a325-4916-8188-6516121331ce] Running I0221 08:56:15.120358 239635 system_pods.go:61] "coredns-64897985d-snkv2" [2ca2a7a8-2903-47ca-bcf3-097175f8bc79] Running I0221 08:56:15.120364 239635 system_pods.go:61] "etcd-false-20220221084934-6550" [85157cb6-493b-47f3-a078-9c7f3086c0ae] Running I0221 08:56:15.120373 239635 system_pods.go:61] "kube-apiserver-false-20220221084934-6550" [bd7518d6-e2db-4f22-9f37-fa5831613936] Running I0221 08:56:15.120380 239635 system_pods.go:61] "kube-controller-manager-false-20220221084934-6550" [0bca9a27-5e63-4cd7-8c81-e56c354e24da] Running I0221 08:56:15.120389 239635 system_pods.go:61] "kube-proxy-mlfhq" [b1256bd2-9a7f-4f1f-861d-1eedacb992be] Running I0221 08:56:15.120395 239635 system_pods.go:61] "kube-scheduler-false-20220221084934-6550" [4f15dbe8-f5f0-4895-a7a2-ca7d40a0e148] Running I0221 08:56:15.120408 239635 system_pods.go:61] "storage-provisioner" [e58a0e76-397e-4653-82c8-a63621513203] Running I0221 08:56:15.120414 239635 system_pods.go:74] duration metric: took 167.460543ms to wait for pod list to return data ... I0221 08:56:15.120423 239635 default_sa.go:34] waiting for default service account to be created ... I0221 08:56:15.317796 239635 default_sa.go:45] found service account: "default" I0221 08:56:15.317819 239635 default_sa.go:55] duration metric: took 197.391118ms for default service account to be created ... I0221 08:56:15.317826 239635 system_pods.go:116] waiting for k8s-apps to be running ... I0221 08:56:15.520020 239635 system_pods.go:86] 8 kube-system pods found I0221 08:56:15.520058 239635 system_pods.go:89] "coredns-64897985d-9k8b6" [7231ddf1-a325-4916-8188-6516121331ce] Running I0221 08:56:15.520067 239635 system_pods.go:89] "coredns-64897985d-snkv2" [2ca2a7a8-2903-47ca-bcf3-097175f8bc79] Running I0221 08:56:15.520073 239635 system_pods.go:89] "etcd-false-20220221084934-6550" [85157cb6-493b-47f3-a078-9c7f3086c0ae] Running I0221 08:56:15.520080 239635 system_pods.go:89] "kube-apiserver-false-20220221084934-6550" [bd7518d6-e2db-4f22-9f37-fa5831613936] Running I0221 08:56:15.520088 239635 system_pods.go:89] "kube-controller-manager-false-20220221084934-6550" [0bca9a27-5e63-4cd7-8c81-e56c354e24da] Running I0221 08:56:15.520099 239635 system_pods.go:89] "kube-proxy-mlfhq" [b1256bd2-9a7f-4f1f-861d-1eedacb992be] Running I0221 08:56:15.520110 239635 system_pods.go:89] "kube-scheduler-false-20220221084934-6550" [4f15dbe8-f5f0-4895-a7a2-ca7d40a0e148] Running I0221 08:56:15.520121 239635 system_pods.go:89] "storage-provisioner" [e58a0e76-397e-4653-82c8-a63621513203] Running I0221 08:56:15.520133 239635 system_pods.go:126] duration metric: took 202.301442ms to wait for k8s-apps to be running ... I0221 08:56:15.520146 239635 system_svc.go:44] waiting for kubelet service to be running .... I0221 08:56:15.520194 239635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:56:15.531798 239635 system_svc.go:56] duration metric: took 11.647388ms WaitForService to wait for kubelet. I0221 08:56:15.531849 239635 kubeadm.go:548] duration metric: took 4.021409458s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 08:56:15.531874 239635 node_conditions.go:102] verifying NodePressure condition ... 
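The kubelet check above leans on systemd's exit codes: systemctl is-active --quiet prints nothing and signals the answer through its exit status, which is why WaitForService completes in 11ms, a single process spawn. A sketch mirroring the exact command from the log:

// svcactive.go: check a service the way system_svc.go does above.
package main

import (
	"fmt"
	"os/exec"
)

func serviceActive(name string) bool {
	// Exit status 0 means active; --quiet suppresses all output.
	return exec.Command("sudo", "systemctl", "is-active", "--quiet",
		"service", name).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}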
I0221 08:56:15.718468 239635 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 08:56:15.718504 239635 node_conditions.go:123] node cpu capacity is 8 I0221 08:56:15.718520 239635 node_conditions.go:105] duration metric: took 186.636719ms to run NodePressure ... I0221 08:56:15.718534 239635 start.go:213] waiting for startup goroutines ... I0221 08:56:15.765073 239635 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0) I0221 08:56:15.767772 239635 out.go:176] * Done! kubectl is now configured to use "false-20220221084934-6550" cluster and "default" namespace by default I0221 08:56:13.603328 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:15.607461 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:12.594837 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:15.094474 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:17.095174 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:14.327798 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:17.363160 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:17.608185 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:20.103368 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:19.595203 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:22.094022 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:20.398233 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:23.439137 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:22.106959 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:24.109509 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.606973 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:24.094532 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.594351 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:26.477627 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:28.607609 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:31.082276 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:29.094290 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:31.595545 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status 
"Ready":"False" I0221 08:56:29.516965 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:32.555149 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:33.107320 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:35.583226 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:34.094168 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:36.094581 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:35.592238 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:38.627148 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:38.107435 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:40.606736 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:38.593443 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:40.593849 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:41.663147 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:43.082434 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:45.107171 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:42.594084 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:44.594768 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:47.093943 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:44.699280 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:47.744968 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:47.583447 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:49.608204 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:51.608560 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:49.593364 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:51.593995 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:50.783109 208829 stop.go:59] stop err: Maximum number of retries (60) exceeded I0221 08:56:50.783155 208829 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded I0221 08:56:50.783567 208829 cli_runner.go:133] Run: docker container inspect 
auto-20220221084933-6550 --format={{.State.Status}} W0221 08:56:50.818323 208829 delete.go:135] deletehost failed: Docker machine "auto-20220221084933-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. I0221 08:56:50.818397 208829 cli_runner.go:133] Run: docker container inspect -f {{.Id}} auto-20220221084933-6550 I0221 08:56:50.852482 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:50.885015 208829 cli_runner.go:133] Run: docker exec --privileged -t auto-20220221084933-6550 /bin/bash -c "sudo init 0" W0221 08:56:50.919078 208829 cli_runner.go:180] docker exec --privileged -t auto-20220221084933-6550 /bin/bash -c "sudo init 0" returned with exit code 1 I0221 08:56:50.919109 208829 oci.go:659] error shutdown auto-20220221084933-6550: docker exec --privileged -t auto-20220221084933-6550 /bin/bash -c "sudo init 0": exit status 1 stdout: stderr: Error response from daemon: Container 00857a088a82e39c05eb12c3d7fa364b17041e9ecbb348b20a1e952ed4c1fb54 is not running I0221 08:56:51.920214 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:56:51.954544 208829 oci.go:673] temporary error: container auto-20220221084933-6550 status is but expect it to be exited I0221 08:56:51.954575 208829 oci.go:679] Successfully shutdown container auto-20220221084933-6550 I0221 08:56:51.954633 208829 cli_runner.go:133] Run: docker rm -f -v auto-20220221084933-6550 I0221 08:56:51.995652 208829 cli_runner.go:133] Run: docker container inspect -f {{.Id}} auto-20220221084933-6550 W0221 08:56:52.030780 208829 cli_runner.go:180] docker container inspect -f {{.Id}} auto-20220221084933-6550 returned with exit code 1 I0221 08:56:52.030857 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:56:52.064402 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:56:52.064463 208829 network_create.go:254] running [docker network inspect auto-20220221084933-6550] to gather additional debugging logs... 
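The teardown above tries a clean shutdown first (docker exec ... sudo init 0), treats "Container ... is not running" as good enough, and then force-removes the container together with its volumes. A Go sketch of that best-effort sequence, using the same commands that appear in the log:

// rmcontainer.go: graceful stop, then force removal, as in delete.go above.
package main

import (
	"fmt"
	"os/exec"
)

func deleteContainer(name string) error {
	// Best-effort: fails harmlessly when the container is already stopped.
	if err := exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run(); err != nil {
		fmt.Printf("graceful shutdown failed (probably ok): %v\n", err)
	}
	// -v also removes the container's anonymous volumes.
	return exec.Command("docker", "rm", "-f", "-v", name).Run()
}

func main() {
	if err := deleteContainer("auto-20220221084933-6550"); err != nil {
		fmt.Println("delete failed:", err)
	}
}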
I0221 08:56:52.064477 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 W0221 08:56:52.098766 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 returned with exit code 1 I0221 08:56:52.098796 208829 network_create.go:257] error running [docker network inspect auto-20220221084933-6550]: docker network inspect auto-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: auto-20220221084933-6550 I0221 08:56:52.098812 208829 network_create.go:259] output of [docker network inspect auto-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: auto-20220221084933-6550 ** /stderr ** W0221 08:56:52.098950 208829 delete.go:139] delete failed (probably ok) I0221 08:56:52.098962 208829 fix.go:120] Sleeping 1 second for extra luck! I0221 08:56:53.099096 208829 start.go:126] createHost starting for "" (driver="docker") I0221 08:56:53.102600 208829 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 08:56:53.102747 208829 start.go:160] libmachine.API.Create for "auto-20220221084933-6550" (driver="docker") I0221 08:56:53.102794 208829 client.go:168] LocalClient.Create starting I0221 08:56:53.102899 208829 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 08:56:53.102945 208829 main.go:130] libmachine: Decoding PEM data... I0221 08:56:53.102970 208829 main.go:130] libmachine: Parsing certificate... I0221 08:56:53.103057 208829 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 08:56:53.103082 208829 main.go:130] libmachine: Decoding PEM data... I0221 08:56:53.103096 208829 main.go:130] libmachine: Parsing certificate... I0221 08:56:53.103314 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:56:53.136740 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:56:53.136805 208829 network_create.go:254] running [docker network inspect auto-20220221084933-6550] to gather additional debugging logs... 
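The "-- stdout --" and "** stderr **" blocks above come from rerunning the failed docker network inspect without a format string and capturing the two streams separately for the log. A small sketch of that debug capture:

// debuginspect.go: rerun a failed command and keep stdout and stderr apart.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func inspectForDebug(network string) {
	cmd := exec.Command("docker", "network", "inspect", network)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	fmt.Printf("-- stdout --\n%s-- /stdout --\n", stdout.String())
	if err != nil {
		fmt.Printf("** stderr **\n%s** /stderr **\n", stderr.String())
	}
}

func main() {
	inspectForDebug("auto-20220221084933-6550")
}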
I0221 08:56:53.136820 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 W0221 08:56:53.169853 208829 cli_runner.go:180] docker network inspect auto-20220221084933-6550 returned with exit code 1 I0221 08:56:53.169885 208829 network_create.go:257] error running [docker network inspect auto-20220221084933-6550]: docker network inspect auto-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: auto-20220221084933-6550 I0221 08:56:53.169901 208829 network_create.go:259] output of [docker network inspect auto-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: auto-20220221084933-6550 ** /stderr ** I0221 08:56:53.169943 208829 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:56:53.204718 208829 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3aad4971443d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:75:24:60:d8}} I0221 08:56:53.205609 208829 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-8f04c0f799cd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:40:4a:89:16}} I0221 08:56:53.206406 208829 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-259ea390e559 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d1:27:54:57}} I0221 08:56:53.207351 208829 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0002b64c0 192.168.76.0:0xc0002b6468] misses:0} I0221 08:56:53.207411 208829 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 08:56:53.207422 208829 network_create.go:106] attempt to create docker network auto-20220221084933-6550 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ... 
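The subnet probing above starts at 192.168.49.0/24 and steps the third octet by 9 (49, 58, 67, 76) until it finds a /24 no local bridge already occupies. A simplified sketch of that scan; the start address and step size are read off the log, and the collision check here only consults local interface addresses, which is narrower than minikube's real check:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside the candidate /24.
func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
			return true // a bridge like br-3aad4971443d already sits here
		}
	}
	return false
}

func main() {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		_, subnet, _ := net.ParseCIDR(cidr)
		if !taken(subnet) {
			fmt.Println("using free private subnet", cidr) // e.g. 192.168.76.0/24
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}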
I0221 08:56:53.207482 208829 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220221084933-6550 I0221 08:56:53.282938 208829 network_create.go:90] docker network auto-20220221084933-6550 192.168.76.0/24 created I0221 08:56:53.282974 208829 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20220221084933-6550" container I0221 08:56:53.283110 208829 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 08:56:53.324582 208829 cli_runner.go:133] Run: docker volume create auto-20220221084933-6550 --label name.minikube.sigs.k8s.io=auto-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 08:56:53.368627 208829 oci.go:102] Successfully created a docker volume auto-20220221084933-6550 I0221 08:56:53.368710 208829 cli_runner.go:133] Run: docker run --rm --name auto-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220221084933-6550 --entrypoint /usr/bin/test -v auto-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 08:56:53.890381 208829 oci.go:106] Successfully prepared a docker volume auto-20220221084933-6550 I0221 08:56:53.890421 208829 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:56:53.890441 208829 kic.go:179] Starting extracting preloaded images to volume ... I0221 08:56:53.890510 208829 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 08:56:54.108380 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:56.583351 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:53.594291 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:55.594982 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:59.083417 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:01.108902 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:56:57.595281 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:00.095968 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:56:59.757141 208829 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v 
auto-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.866593931s) I0221 08:56:59.757171 208829 kic.go:188] duration metric: took 5.866728 seconds to extract preloaded images to volume W0221 08:56:59.757217 208829 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 08:56:59.757234 208829 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0221 08:56:59.757273 208829 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 08:56:59.893338 208829 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220221084933-6550 --name auto-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220221084933-6550 --network auto-20220221084933-6550 --ip 192.168.76.2 --volume auto-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 08:57:00.408893 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Running}} I0221 08:57:00.450206 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:00.509150 208829 cli_runner.go:133] Run: docker exec auto-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 08:57:00.577744 208829 oci.go:281] the created container "auto-20220221084933-6550" has a running status. I0221 08:57:00.577773 208829 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa... I0221 08:57:00.682193 208829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 08:57:00.791460 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:00.836336 208829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 08:57:00.836364 208829 kic_runner.go:114] Args: [docker exec --privileged auto-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 08:57:00.939165 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:00.987113 208829 machine.go:88] provisioning docker machine ... 
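The "kic.go:210] Creating ssh key for kic: .../machines/auto-.../id_rsa" step above generates an RSA keypair whose public half is then pushed into the container's /home/docker/.ssh/authorized_keys (the 381-byte copy in the log). A minimal sketch of that generation, assuming the standard library plus the golang.org/x/crypto/ssh module; file names are relative here for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Private half, PEM-encoded -> id_rsa (0600, as for any SSH key).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}

	// Public half in authorized_keys format -> id_rsa.pub.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}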
I0221 08:57:00.987158 208829 ubuntu.go:169] provisioning hostname "auto-20220221084933-6550" I0221 08:57:00.987220 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.032031 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.032362 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.032391 208829 main.go:130] libmachine: About to run SSH command: sudo hostname auto-20220221084933-6550 && echo "auto-20220221084933-6550" | sudo tee /etc/hostname I0221 08:57:01.178093 208829 main.go:130] libmachine: SSH cmd err, output: : auto-20220221084933-6550 I0221 08:57:01.178171 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.217179 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.217336 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.217356 208829 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sauto-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 auto-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:57:01.347912 208829 main.go:130] libmachine: SSH cmd err, output: : I0221 08:57:01.347952 208829 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:57:01.348022 208829 ubuntu.go:177] setting up certificates I0221 08:57:01.348044 208829 provision.go:83] configureAuth start I0221 08:57:01.348098 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:01.383523 208829 provision.go:138] copyHostCerts I0221 08:57:01.383615 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... 
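"Using SSH client type: native" above means the provisioning commands run over an in-process Go SSH client against the container's forwarded port (127.0.0.1:49379 in this run) instead of an external ssh binary. A minimal sketch of running one such command with golang.org/x/crypto/ssh; the address and key path mirror the log but are illustrative:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // the key created for the kic container
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a loopback-only forwarded port
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49379", cfg) // the published 127.0.0.1::22 port
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The same hostname-provisioning command the log shows.
	out, err := session.CombinedOutput(`sudo hostname auto-20220221084933-6550 && echo "auto-20220221084933-6550" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v output=%s\n", err, out)
}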
I0221 08:57:01.383628 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:57:01.383688 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:57:01.383771 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 08:57:01.383783 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:57:01.383804 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:57:01.384445 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 08:57:01.384509 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:57:01.384564 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:57:01.384699 208829 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.auto-20220221084933-6550 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220221084933-6550] I0221 08:57:01.504349 208829 provision.go:172] copyRemoteCerts I0221 08:57:01.504402 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:57:01.504434 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.538951 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:01.626693 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:57:01.644880 208829 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes) I0221 08:57:01.663373 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 08:57:01.681925 208829 provision.go:86] duration metric: configureAuth took 333.866692ms I0221 08:57:01.681956 208829 ubuntu.go:193] setting minikube options for container-runtime I0221 08:57:01.682119 208829 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:57:01.682172 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.716679 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.716831 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.716844 208829 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 08:57:01.839716 208829 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 08:57:01.839749 208829 ubuntu.go:71] root file system type: overlay I0221 08:57:01.839983 208829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 08:57:01.840047 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:01.884181 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:01.884320 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:01.884394 208829 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 08:57:02.018366 208829 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 08:57:02.018469 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:02.061379 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:02.061568 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:02.061598 208829 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 08:57:02.838157 208829 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 08:57:02.011813270 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 08:57:02.838207 208829 machine.go:91] provisioned docker machine in 1.851064371s I0221 08:57:02.838217 208829 client.go:171] LocalClient.Create took 9.735413411s I0221 08:57:02.838234 208829 start.go:168] duration metric: libmachine.API.Create for "auto-20220221084933-6550" took 9.735486959s I0221 08:57:02.838242 208829 start.go:267] post-start starting for "auto-20220221084933-6550" (driver="docker") I0221 08:57:02.838250 208829 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 08:57:02.838307 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 08:57:02.838350 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:02.874473 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:02.968610 208829 ssh_runner.go:195] Run: cat /etc/os-release I0221 08:57:02.972155 208829 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 08:57:02.972187 208829 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 08:57:02.972200 208829 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 08:57:02.972207 208829 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 08:57:02.972221 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
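The `sudo diff -u ... || { sudo mv ...; sudo systemctl ... }` command above is an update-if-changed guard: the rendered docker.service.new only replaces the live unit, and dockerd only restarts, when the content actually differs (on the second provisioning pass later in this log the diff is empty, so the restart is skipped). The same idea locally, sketched with the standard library; paths are illustrative and this would need root:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	oldContent, _ := os.ReadFile(unit) // ignore the error: a missing unit is simply "different"
	newContent, err := os.ReadFile(unit + ".new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(oldContent, newContent) {
		return // no change; skip the disruptive docker restart
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}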
I0221 08:57:02.972277 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:57:02.972364 208829 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:57:02.972460 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:57:02.982082 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:57:03.032306 208829 start.go:270] post-start completed in 194.048524ms I0221 08:57:03.032660 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:03.072584 208829 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/config.json ... I0221 08:57:03.072847 208829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:57:03.072892 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:03.110734 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:03.199585 208829 start.go:129] duration metric: createHost completed in 10.100444989s I0221 08:57:03.199664 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} W0221 08:57:03.240885 208829 fix.go:134] unexpected machine state, will restart: I0221 08:57:03.240925 208829 machine.go:88] provisioning docker machine ... I0221 08:57:03.240947 208829 ubuntu.go:169] provisioning hostname "auto-20220221084933-6550" I0221 08:57:03.241037 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:03.279603 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:03.279808 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:03.279836 208829 main.go:130] libmachine: About to run SSH command: sudo hostname auto-20220221084933-6550 && echo "auto-20220221084933-6550" | sudo tee /etc/hostname I0221 08:57:03.418493 208829 main.go:130] libmachine: SSH cmd err, output: : auto-20220221084933-6550 I0221 08:57:03.418569 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:03.458210 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:03.458405 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:03.458437 208829 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sauto-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 auto-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:57:03.586755 208829 main.go:130] libmachine: SSH cmd err, output: : I0221 08:57:03.586795 208829 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:57:03.586829 208829 ubuntu.go:177] setting up certificates I0221 08:57:03.586839 208829 provision.go:83] configureAuth start I0221 08:57:03.586896 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:03.628926 208829 provision.go:138] copyHostCerts I0221 08:57:03.628997 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 08:57:03.629014 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:57:03.629092 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:57:03.629179 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... 
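The configureAuth pass above regenerates the dockerd server certificate, and the "generating server cert ... san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220221084933-6550]" line that follows shows the SANs it bakes in: the node IP, loopback, and the machine's hostnames. A compact sketch of that CA-signed issuance with crypto/x509; key loading is elided, and caCert/caKey/serverKey are assumed to be already-parsed values:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a TLS server certificate for the given SANs, signed by the CA.
// It returns DER bytes; the caller PEM-encodes them into server.pem.
func signServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-20220221084933-6550"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the log: node IP, loopback, and hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "auto-20220221084933-6550"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
}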
I0221 08:57:03.629195 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:57:03.629228 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:57:03.629294 208829 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 08:57:03.629308 208829 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:57:03.629336 208829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:57:03.629390 208829 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.auto-20220221084933-6550 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220221084933-6550] I0221 08:57:03.991600 208829 provision.go:172] copyRemoteCerts I0221 08:57:03.991662 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:57:03.991694 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.026718 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:04.116065 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 08:57:04.138038 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes) I0221 08:57:04.160814 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 08:57:04.180299 208829 provision.go:86] duration metric: configureAuth took 593.439078ms I0221 08:57:04.180335 208829 ubuntu.go:193] setting minikube options for container-runtime I0221 08:57:04.180508 208829 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 
08:57:04.180555 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.218384 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:04.218602 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:04.218623 208829 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 08:57:04.349505 208829 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 08:57:04.349534 208829 ubuntu.go:71] root file system type: overlay I0221 08:57:04.349727 208829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 08:57:04.349790 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.387207 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:04.387390 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:04.387497 208829 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 08:57:04.522446 208829 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 08:57:04.522540 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.564759 208829 main.go:130] libmachine: Using SSH client type: native I0221 08:57:04.564947 208829 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49379 } I0221 08:57:04.564981 208829 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 08:57:04.690709 208829 main.go:130] libmachine: SSH cmd err, output: : I0221 08:57:04.690733 208829 machine.go:91] provisioned docker machine in 1.449802307s I0221 08:57:04.690746 208829 start.go:267] post-start starting for "auto-20220221084933-6550" (driver="docker") I0221 08:57:04.690751 208829 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 08:57:04.690796 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 08:57:04.690832 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.737205 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:04.832817 208829 ssh_runner.go:195] Run: cat /etc/os-release I0221 08:57:04.836638 208829 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 08:57:04.836675 208829 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 08:57:04.836688 208829 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 08:57:04.836695 208829 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 08:57:04.836709 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... I0221 08:57:04.836773 208829 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... 
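The filesync scan above (and the "local asset: .../files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs" mapping that follows) mirrors everything under .minikube/files onto the node: the path relative to files/ becomes the destination path inside the container. A sketch of computing those targets; the copy itself is elided and the root path is hypothetical:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/home/user/.minikube/files" // hypothetical local asset root
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		// The path relative to files/ is the destination path on the node.
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
	if err != nil {
		panic(err)
	}
}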
I0221 08:57:04.836854 208829 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:57:04.836951 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:57:04.848055 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:57:04.871820 208829 start.go:270] post-start completed in 181.057737ms I0221 08:57:04.871882 208829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:57:04.871922 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:04.909364 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:05.004082 208829 fix.go:57] fixHost completed within 3m17.180729915s I0221 08:57:05.004116 208829 start.go:80] releasing machines lock for "auto-20220221084933-6550", held for 3m17.1807932s I0221 08:57:05.004203 208829 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220221084933-6550 I0221 08:57:05.058656 208829 ssh_runner.go:195] Run: sudo service containerd status I0221 08:57:05.058691 208829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:57:05.058719 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:05.058747 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:05.100435 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:05.100436 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:05.208067 208829 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:57:05.344794 208829 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 08:57:05.344863 208829 ssh_runner.go:195] Run: sudo service crio status I0221 08:57:05.371501 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 08:57:05.384684 208829 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:57:05.394112 208829 ssh_runner.go:195] Run: sudo service docker status I0221 08:57:05.412284 208829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:57:05.462323 208829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:57:05.510852 
208829 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 08:57:05.510947 208829 cli_runner.go:133] Run: docker network inspect auto-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:57:05.549082 208829 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts I0221 08:57:05.552663 208829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:57:03.608727 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:06.083201 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:05.565730 208829 out.go:176] - kubelet.housekeeping-interval=5m I0221 08:57:05.565811 208829 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:57:05.565865 208829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:57:05.606956 208829 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:57:05.607025 208829 docker.go:537] Images already preloaded, skipping extraction I0221 08:57:05.607086 208829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:57:05.649928 208829 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:57:05.649954 208829 cache_images.go:84] Images are preloaded, skipping loading I0221 08:57:05.649996 208829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 08:57:05.754698 208829 cni.go:93] Creating CNI manager for "" I0221 08:57:05.754720 208829 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:57:05.754727 208829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 08:57:05.754740 208829 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20220221084933-6550 NodeName:auto-20220221084933-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] 
Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 08:57:05.754849 208829 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.76.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "auto-20220221084933-6550" kubeletExtraArgs: node-ip: 192.168.76.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.76.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 08:57:05.754928 208829 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20220221084933-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:auto-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 08:57:05.754968 208829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 08:57:05.763653 208829 
binaries.go:44] Found k8s binaries, skipping transfer I0221 08:57:05.763829 208829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d I0221 08:57:05.772841 208829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes) I0221 08:57:05.788989 208829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 08:57:05.805034 208829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes) I0221 08:57:05.819979 208829 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes) I0221 08:57:05.835873 208829 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes) I0221 08:57:05.850716 208829 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts I0221 08:57:05.854214 208829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:57:05.866093 208829 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550 for IP: 192.168.76.2 I0221 08:57:05.866210 208829 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 08:57:05.866261 208829 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 08:57:05.866320 208829 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.key I0221 08:57:05.866339 208829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt with IP's: [] I0221 08:57:05.946527 208829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt ... I0221 08:57:05.946560 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: {Name:mkf66599337a85f926bbf47bc67309a30f586d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:05.946728 208829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.key ... 
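
One quirk worth flagging in the kubeadm config dumped above (kubeadm.go:162): the eviction thresholds read "0%!"(MISSING) where the rendered manifest actually contains a plain "0%". That is Go's fmt package reporting a formatting verb with no matching argument, which is what happens when already-rendered YAML containing a literal percent sign is passed through a printf-style call as the format string. The exact call site in minikube is an assumption here; the snippet below only demonstrates the fmt behavior that produces this output shape:

package main

import "fmt"

func main() {
	yaml := `nodefs.available: "0%"`
	// Bug pattern: data used as the format string. fmt parses the %" as a
	// verb with no argument and prints %!"(MISSING) in its place.
	fmt.Println(fmt.Sprintf(yaml)) // nodefs.available: "0%!"(MISSING)
	// Safe: pass the data as an argument instead.
	fmt.Printf("%s\n", yaml) // nodefs.available: "0%"
}
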
I0221 08:57:05.946743 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.key: {Name:mk4b98916e364b75e052175ceff980d7dfb7d59c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:05.946835 208829 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25 I0221 08:57:05.946858 208829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 08:57:06.102490 208829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25 ... I0221 08:57:06.102522 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25: {Name:mk69d8d8d16b926e465f137654650f785385ca18 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.102679 208829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25 ... I0221 08:57:06.102692 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25: {Name:mkd2971b67162a2c822475fe096d0b0e4ec0054c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.102789 208829 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt I0221 08:57:06.102842 208829 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key I0221 08:57:06.102884 208829 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key I0221 08:57:06.102898 208829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt with IP's: [] I0221 08:57:06.201893 208829 crypto.go:156] Writing cert to 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt ... I0221 08:57:06.201927 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt: {Name:mk80af5c6cf1913702c41b816aa4d84fc4ef770d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.202100 208829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key ... I0221 08:57:06.202114 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key: {Name:mk0ab9a45b46877a73f835519f0bc8a4becdda03 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:06.202272 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 08:57:06.202308 208829 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 08:57:06.202319 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 08:57:06.202344 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 08:57:06.202367 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 08:57:06.202393 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 08:57:06.202432 208829 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 08:57:06.203351 208829 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 08:57:06.222080 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0221 08:57:06.240034 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 08:57:06.257937 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 08:57:06.275915 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 08:57:06.294110 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 08:57:06.312374 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 08:57:06.335313 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 08:57:06.357500 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 08:57:06.377803 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 08:57:06.396818 208829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 08:57:06.416836 208829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 08:57:06.432177 208829 ssh_runner.go:195] Run: openssl version I0221 08:57:06.438394 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 08:57:06.446874 208829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 08:57:06.450267 208829 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 08:57:06.450322 208829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 08:57:06.456518 208829 ssh_runner.go:195] Run: sudo 
/bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 08:57:06.465032 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 08:57:06.473821 208829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 08:57:06.478022 208829 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 08:57:06.478083 208829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 08:57:06.484537 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 08:57:06.493760 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 08:57:06.501591 208829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 08:57:06.505292 208829 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 08:57:06.505349 208829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 08:57:06.510784 208829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 08:57:06.519268 208829 kubeadm.go:391] StartCluster: {Name:auto-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:auto-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p 
MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:57:06.519384 208829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 08:57:06.558740 208829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 08:57:06.576334 208829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 08:57:06.586331 208829 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 08:57:06.586391 208829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 08:57:06.595753 208829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 08:57:06.595802 208829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 08:57:02.593875 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:05.095863 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:07.226129 208829 out.go:203] - Generating certificates and keys ... I0221 08:57:08.606947 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:11.085043 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:07.593598 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:09.595599 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:11.600301 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:09.741262 208829 out.go:203] - Booting up control plane ... 
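
For readers following the certs.go/crypto.go entries above (client.crt, apiserver.crt with the IP SAN list, proxy-client.crt): the heavy lifting is all Go standard library. Below is a compact sketch of the same shape of operation, issuing a CA-signed serving certificate for the SANs recorded in the log. It is illustrative only; names, lifetimes, and key sizes are made up, and it is not minikube's actual certs.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in for .minikube/ca.{crt,key}: a self-signed CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving cert carrying the IP SANs recorded for apiserver.crt above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
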
I0221 08:57:13.606594 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:16.104269 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:14.093831 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:16.094542 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:17.794661 208829 out.go:203] - Configuring RBAC rules ... I0221 08:57:18.211217 208829 cni.go:93] Creating CNI manager for "" I0221 08:57:18.211242 208829 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 08:57:18.211265 208829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 08:57:18.211404 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:18.211499 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=auto-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T08_57_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:18.237810 208829 ops.go:34] apiserver oom_adj: -16 I0221 08:57:18.404731 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:18.582815 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:20.585066 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:18.094583 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:20.594516 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:19.390100 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:19.889764 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:20.389415 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:20.889525 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:21.389657 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:21.889423 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:22.389317 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:22.889659 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:23.389482 208829 ssh_runner.go:195] Run: 
sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:23.889350 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:23.083375 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:25.108449 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:23.094746 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:25.094898 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:27.096067 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:24.389561 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:24.890221 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:25.389547 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:25.889416 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:26.389374 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:26.889998 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:27.389296 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:27.889606 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:28.389536 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:28.890229 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:27.607457 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:29.607786 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:29.389938 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:29.890263 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:30.389266 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:30.889421 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:31.389996 208829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:57:31.530744 208829 
kubeadm.go:1020] duration metric: took 13.31938415s to wait for elevateKubeSystemPrivileges. I0221 08:57:31.530783 208829 kubeadm.go:393] StartCluster complete in 25.011523066s I0221 08:57:31.530804 208829 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:31.530919 208829 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:57:31.532695 208829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:57:32.057336 208829 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20220221084933-6550" rescaled to 1 I0221 08:57:32.057421 208829 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:57:32.057439 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 08:57:32.060752 208829 out.go:176] * Verifying Kubernetes components... I0221 08:57:32.057752 208829 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:57:32.057772 208829 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 08:57:32.060950 208829 addons.go:65] Setting storage-provisioner=true in profile "auto-20220221084933-6550" I0221 08:57:32.060975 208829 addons.go:153] Setting addon storage-provisioner=true in "auto-20220221084933-6550" W0221 08:57:32.060982 208829 addons.go:165] addon storage-provisioner should already be in state true I0221 08:57:32.060820 208829 ssh_runner.go:195] Run: sudo service kubelet status I0221 08:57:32.061027 208829 host.go:66] Checking if "auto-20220221084933-6550" exists ... 
I0221 08:57:32.061101 208829 addons.go:65] Setting default-storageclass=true in profile "auto-20220221084933-6550" I0221 08:57:32.061124 208829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20220221084933-6550" I0221 08:57:32.061419 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:32.061567 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:29.594682 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:31.595072 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:32.117227 208829 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 08:57:32.117371 208829 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:57:32.117382 208829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 08:57:32.117435 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:32.121052 208829 addons.go:153] Setting addon default-storageclass=true in "auto-20220221084933-6550" W0221 08:57:32.121074 208829 addons.go:165] addon default-storageclass should already be in state true I0221 08:57:32.121097 208829 host.go:66] Checking if "auto-20220221084933-6550" exists ... I0221 08:57:32.121616 208829 cli_runner.go:133] Run: docker container inspect auto-20220221084933-6550 --format={{.State.Status}} I0221 08:57:32.164256 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:32.164577 208829 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 08:57:32.164596 208829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 08:57:32.164645 208829 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220221084933-6550 I0221 08:57:32.198642 208829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/auto-20220221084933-6550/id_rsa Username:docker} I0221 08:57:32.232356 208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 08:57:32.235124 208829 node_ready.go:35] waiting up to 5m0s for node "auto-20220221084933-6550" to be "Ready" ... I0221 08:57:32.304584 208829 node_ready.go:49] node "auto-20220221084933-6550" has status "Ready":"True" I0221 08:57:32.304610 208829 node_ready.go:38] duration metric: took 69.457998ms waiting for node "auto-20220221084933-6550" to be "Ready" ... 
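
The ssh_runner entry above that pipes "kubectl ... get configmap coredns -o yaml" through sed is how the host.minikube.internal record lands inside the cluster: a hosts{} block is spliced in just before the forward directive, and the edited ConfigMap is fed back with "kubectl replace -f -". A sketch of the equivalent Corefile edit in Go, shown only for the string manipulation (the Corefile text is a typical default, not copied from this cluster):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
    errors
    health
    forward . /etc/resolv.conf
    cache 30
}`
	hosts := `    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
`
	// Insert the hosts{} block immediately before the forward directive,
	// mirroring what the sed pipeline in the log does.
	out := strings.Replace(corefile, "    forward .", hosts+"    forward .", 1)
	fmt.Println(out)
}
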
I0221 08:57:32.304620 208829 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:57:32.317676 208829 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-6wgl9" in "kube-system" namespace to be "Ready" ... I0221 08:57:32.426161 208829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 08:57:32.431088 208829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:57:33.832503 208829 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.600104705s) I0221 08:57:33.832609 208829 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS I0221 08:57:33.832546 208829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.406350623s) I0221 08:57:33.903748 208829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.472601501s) I0221 08:57:33.907151 208829 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 08:57:33.907249 208829 addons.go:417] enableAddons completed in 1.84947813s I0221 08:57:32.085234 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.109374 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:36.583295 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.093783 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:36.095122 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:34.337160 208829 pod_ready.go:92] pod "coredns-64897985d-6wgl9" in "kube-system" namespace has status "Ready":"True" I0221 08:57:34.337188 208829 pod_ready.go:81] duration metric: took 2.019460111s waiting for pod "coredns-64897985d-6wgl9" in "kube-system" namespace to be "Ready" ... I0221 08:57:34.337200 208829 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-rg6k7" in "kube-system" namespace to be "Ready" ... 
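
The pod_ready.go lines that dominate the rest of this section are a plain poll loop: fetch the pod, inspect its Ready condition, sleep, repeat until it reports "True" or the deadline expires. A minimal client-go equivalent follows; the pod name, namespace, and 5m budget are taken from the log, while the 500ms interval and everything else are an illustrative sketch rather than minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod's Ready condition is True, or give up after 5m.
	err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-64897985d-rg6k7", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep retrying
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready:", err == nil)
}
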
I0221 08:57:36.348968 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:38.848999 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:39.105966 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:41.606692 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:38.593566 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:40.593916 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:41.350153 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:43.848525 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:44.106976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:46.583983 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:42.594575 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:44.594678 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:46.594775 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:45.850432 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:48.348860 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:49.084072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:51.112230 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:49.093600 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:51.093716 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:50.349391 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:52.350161 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:53.606853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:55.607543 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:53.594138 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:55.594195 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:54.850341 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:57.349412 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:57:58.108377 223679 pod_ready.go:102] pod 
"calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:00.608452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:57:58.094464 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:00.594174 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:57:59.349483 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:01.851137 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:03.082697 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:05.107411 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:03.094260 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:05.097983 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:04.348276 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:06.349068 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:08.349553 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:07.583427 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.086403 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:07.594946 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.095115 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:10.848406 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:13.348644 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:12.582090 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:14.607319 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:12.593715 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:14.594295 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.097192 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:15.349934 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.850273 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:17.083915 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:19.607890 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status 
"Ready":"False" I0221 08:58:19.593497 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:21.593740 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:20.349272 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:22.851407 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:22.082238 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:24.107976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:26.608511 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:23.594026 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:26.094324 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:25.348995 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:27.356185 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:29.107566 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:31.108790 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:28.594956 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:31.094580 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:29.850008 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:32.349818 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:33.582823 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:35.586175 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:33.593910 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:35.595299 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:34.848401 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:36.848883 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:38.849110 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:37.607126 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.082258 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:38.093960 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.094102 227869 pod_ready.go:102] pod 
"coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:42.095073 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:40.849629 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:43.349389 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:42.108072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:44.607510 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:46.608936 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:44.593597 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:46.594499 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:45.849561 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:48.348597 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:48.609972 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:51.082477 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:48.594616 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:50.594840 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:50.348823 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:52.848990 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:53.105968 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:55.582165 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:53.094539 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:55.094604 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:54.849975 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.349154 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.606112 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:59.608167 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.593439 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:59.593598 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:01.594070 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status 
"Ready":"False" I0221 08:58:59.349596 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:01.849909 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:02.106572 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.107313 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.108123 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.094375 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.593739 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.349290 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.849034 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.849142 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.108992 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.582664 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.594057 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.594906 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.849260 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:12.849348 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:12.583673 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False" I0221 08:59:14.112706 223679 pod_ready.go:81] duration metric: took 4m0.048450561s waiting for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ... E0221 08:59:14.112734 223679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 08:59:14.112746 223679 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.117793 223679 pod_ready.go:92] pod "etcd-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.117820 223679 pod_ready.go:81] duration metric: took 5.066157ms waiting for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.117832 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.122627 223679 pod_ready.go:92] pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.122647 223679 pod_ready.go:81] duration metric: took 4.807147ms waiting for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
I0221 08:59:14.122656 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.127594 223679 pod_ready.go:92] pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.127616 223679 pod_ready.go:81] duration metric: took 4.954276ms waiting for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.127627 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.480801 223679 pod_ready.go:92] pod "kube-proxy-kwcvx" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.480829 223679 pod_ready.go:81] duration metric: took 353.19554ms waiting for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.480842 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.879906 223679 pod_ready.go:92] pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:14.879927 223679 pod_ready.go:81] duration metric: took 399.077104ms waiting for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:14.879937 223679 pod_ready.go:38] duration metric: took 4m0.837387313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:59:14.879961 223679 api_server.go:51] waiting for apiserver process to appear ... 
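
Once the wait gives up (or succeeds), the logs.go entries that follow enumerate the control-plane containers by name filter so their logs can be harvested. The discovery step is just "docker ps" with a Go template; a small illustrative wrapper, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the "docker ps -a --filter=name=k8s_kube-apiserver" calls below.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
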
I0221 08:59:14.880012 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:14.942433 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:14.942510 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:15.037787 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:15.037848 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:15.134487 223679 logs.go:274] 0 containers: [] W0221 08:59:15.134520 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:15.134573 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:15.229656 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:15.229733 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:15.320906 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:15.320985 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:15.417453 223679 logs.go:274] 0 containers: [] W0221 08:59:15.417481 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:15.417528 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:15.513893 223679 logs.go:274] 2 containers: [528acfa448ce f6cf402c0c9d] I0221 08:59:15.513990 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:15.550415 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:15.550454 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:15.550465 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:15.576242 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:15.576295 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:15.618102 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:15.618136 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:15.656954 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:15.656987 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:15.722111 223679 logs.go:123] Gathering logs for storage-provisioner [f6cf402c0c9d] ... I0221 08:59:15.722147 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6cf402c0c9d" I0221 08:59:15.808702 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:15.808737 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:15.889269 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:15.889312 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:15.945538 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:15.945571 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:16.147141 223679 logs.go:123] Gathering logs for describe nodes ... 
I0221 08:59:16.147186 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:16.338070 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:16.338111 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:16.431605 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:16.431645 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:16.530228 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:16.530264 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:12.595167 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:15.094611 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:15.348719 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:17.348992 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:19.103148 223679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 08:59:19.129062 223679 api_server.go:71] duration metric: took 4m5.106529752s to wait for apiserver process to appear ... I0221 08:59:19.129100 223679 api_server.go:87] waiting for apiserver healthz status ... I0221 08:59:19.129165 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:19.224393 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:19.224460 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:19.319828 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:19.319900 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:19.418463 223679 logs.go:274] 0 containers: [] W0221 08:59:19.418495 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:19.418541 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:19.516431 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:19.516522 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:19.607457 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:19.607543 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:19.644308 223679 logs.go:274] 0 containers: [] W0221 08:59:19.644330 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:19.644368 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:19.677987 223679 logs.go:274] 1 containers: [528acfa448ce] I0221 08:59:19.678065 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:19.711573 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:19.711614 223679 logs.go:123] Gathering logs for dmesg ... 
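The api_server.go records above wait for the apiserver process to appear by retrying `pgrep -xnf kube-apiserver.*minikube.*` until it exits zero, then report the elapsed time. A sketch of that poll under the assumption of local execution (minikube routes it through its ssh_runner, and the one-second interval is a guess):

package procwait

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the full command line (-f).
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}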
I0221 08:59:19.711634 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:19.739316 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:19.739352 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:19.829642 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... I0221 08:59:19.829686 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:19.928327 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:19.928367 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:20.030039 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:20.030084 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:20.115493 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:20.115539 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:20.289828 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:20.289874 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:20.351337 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:20.351388 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:20.480018 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:20.480056 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:20.594320 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:20.594358 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:20.641023 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:20.641062 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:17.594243 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:20.094535 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:22.095445 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:19.849214 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:22.349291 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:23.238237 223679 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... I0221 08:59:23.244347 223679 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok I0221 08:59:23.246494 223679 api_server.go:140] control plane version: v1.23.4 I0221 08:59:23.246519 223679 api_server.go:130] duration metric: took 4.1174116s to wait for apiserver health ... I0221 08:59:23.246529 223679 system_pods.go:43] waiting for kube-system pods to appear ... 
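The healthz records at the end of the block above issue a GET against https://<node-ip>:8443/healthz and treat an HTTP 200 body of "ok" as healthy. A minimal sketch of that check; the InsecureSkipVerify below is a simplification for illustration (minikube validates against the cluster's certificates), so it is not something to copy into security-sensitive code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// apiserverHealthy reports whether the healthz endpoint returned HTTP 200.
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: skip CA wiring for brevity
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.67.2:8443/healthz")
	fmt.Println(ok, err)
}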
I0221 08:59:23.246581 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 08:59:23.331088 223679 logs.go:274] 1 containers: [5b808a7ef4a2] I0221 08:59:23.331164 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 08:59:23.425220 223679 logs.go:274] 1 containers: [96cc9489b33e] I0221 08:59:23.425297 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 08:59:23.510198 223679 logs.go:274] 0 containers: [] W0221 08:59:23.510230 223679 logs.go:276] No container was found matching "coredns" I0221 08:59:23.510284 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 08:59:23.548794 223679 logs.go:274] 1 containers: [f012d1d45e22] I0221 08:59:23.548859 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 08:59:23.642803 223679 logs.go:274] 1 containers: [449cc37a92fe] I0221 08:59:23.642891 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 08:59:23.735232 223679 logs.go:274] 0 containers: [] W0221 08:59:23.735263 223679 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 08:59:23.735316 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 08:59:23.820175 223679 logs.go:274] 1 containers: [528acfa448ce] I0221 08:59:23.820245 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 08:59:23.911162 223679 logs.go:274] 1 containers: [cddc9ef001f2] I0221 08:59:23.911205 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ... I0221 08:59:23.911218 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce" I0221 08:59:24.010277 223679 logs.go:123] Gathering logs for kubelet ... I0221 08:59:24.010307 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 08:59:24.188331 223679 logs.go:123] Gathering logs for dmesg ... I0221 08:59:24.188378 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 08:59:24.235517 223679 logs.go:123] Gathering logs for describe nodes ... I0221 08:59:24.235564 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 08:59:24.433778 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ... I0221 08:59:24.433815 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22" I0221 08:59:24.542462 223679 logs.go:123] Gathering logs for Docker ... I0221 08:59:24.542562 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 08:59:24.683898 223679 logs.go:123] Gathering logs for container status ... I0221 08:59:24.683938 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 08:59:24.747804 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ... I0221 08:59:24.747846 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2" I0221 08:59:24.839623 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ... 
I0221 08:59:24.839664 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e" I0221 08:59:24.933214 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ... I0221 08:59:24.933249 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe" I0221 08:59:24.970081 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ... I0221 08:59:24.970115 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2" I0221 08:59:24.593641 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:25.099642 227869 pod_ready.go:81] duration metric: took 4m0.023714023s waiting for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ... E0221 08:59:25.099664 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 08:59:25.099673 227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.101152 227869 pod_ready.go:97] error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found I0221 08:59:25.101173 227869 pod_ready.go:81] duration metric: took 1.494584ms waiting for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ... E0221 08:59:25.101182 227869 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found I0221 08:59:25.101190 227869 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.105178 227869 pod_ready.go:92] pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.105196 227869 pod_ready.go:81] duration metric: took 3.99997ms waiting for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.105204 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.109930 227869 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.109949 227869 pod_ready.go:81] duration metric: took 4.739462ms waiting for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.109958 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.292675 227869 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.292711 227869 pod_ready.go:81] duration metric: took 182.734028ms waiting for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.292723 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.691815 227869 pod_ready.go:92] pod "kube-proxy-q4stn" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.691839 227869 pod_ready.go:81] duration metric: took 399.108423ms waiting for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ... 
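In the block above, the wait for "coredns-64897985d-kn627" ends immediately with "pods ... not found (skipping!)": a pod that has been deleted is treated as done rather than retried until timeout. A sketch of that branch using the apimachinery error helpers, with the clientset wiring assumed as in the earlier sketch:

package podwait

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podGone reports whether the pod no longer exists, so a caller can skip waiting on it.
func podGone(client *kubernetes.Clientset, ns, name string) (bool, error) {
	_, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // pod was deleted (e.g. replaced by a newer ReplicaSet pod): skip it
	}
	return false, err
}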
I0221 08:59:25.691848 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:26.092539 227869 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:26.092566 227869 pod_ready.go:81] duration metric: took 400.710732ms waiting for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:26.092579 227869 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ... I0221 08:59:24.850016 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:27.349859 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:27.559651 223679 system_pods.go:59] 9 kube-system pods found I0221 08:59:27.559689 223679 system_pods.go:61] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.559697 223679 system_pods.go:61] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.559703 223679 system_pods.go:61] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.559708 223679 system_pods.go:61] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.559713 223679 system_pods.go:61] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.559717 223679 system_pods.go:61] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.559722 223679 system_pods.go:61] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.559726 223679 system_pods.go:61] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.559734 223679 system_pods.go:61] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.559742 223679 system_pods.go:74] duration metric: took 4.313209437s to wait for pod list to return data ... I0221 08:59:27.559749 223679 default_sa.go:34] waiting for default service account to be created ... I0221 08:59:27.562671 223679 default_sa.go:45] found service account: "default" I0221 08:59:27.562697 223679 default_sa.go:55] duration metric: took 2.939018ms for default service account to be created ... I0221 08:59:27.562709 223679 system_pods.go:116] waiting for k8s-apps to be running ... 
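The retry.go records that follow re-list the kube-system pods with a growing, jittered delay (263ms, 381ms, 422ms, ... climbing toward ~15s) until kube-dns stops being the missing component. A sketch of that shape of loop; the initial delay, multiplier, jitter, and cap below are illustrative guesses, not minikube's actual constants:

package retry

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil runs fn until it returns nil or the timeout elapses, sleeping a
// jittered, geometrically growing interval (capped) between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	const maxDelay = 15 * time.Second
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %v", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add up to +50% jitter
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}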
I0221 08:59:27.606750 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:27.606791 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.606820 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.606832 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.606849 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.606856 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.606863 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.606870 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.606880 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.606889 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.606913 223679 retry.go:31] will retry after 263.082536ms: missing components: kube-dns I0221 08:59:27.875522 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:27.875558 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:27.875569 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:27.875575 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:27.875581 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:27.875586 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:27.875590 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:27.875593 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:27.875598 223679 system_pods.go:89] 
"kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:27.875603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:27.875619 223679 retry.go:31] will retry after 381.329545ms: missing components: kube-dns I0221 08:59:28.262703 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:28.262737 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:28.262745 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:28.262752 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:28.262757 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:28.262764 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:28.262770 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:28.262776 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:28.262782 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:28.262789 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:28.262812 223679 retry.go:31] will retry after 422.765636ms: missing components: kube-dns I0221 08:59:28.708387 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:28.708425 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:28.708467 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:28.708488 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:28.708506 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" 
[64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:28.708519 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:28.708531 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:28.708537 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:28.708544 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:28.708559 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:28.708575 223679 retry.go:31] will retry after 473.074753ms: missing components: kube-dns I0221 08:59:29.187326 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:29.187359 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:29.187367 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:29.187374 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:29.187379 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:29.187384 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:29.187388 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:29.187392 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:29.187396 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:29.187401 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:29.187414 223679 retry.go:31] will retry after 587.352751ms: missing components: kube-dns I0221 08:59:29.807999 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:29.808041 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:29.808052 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready 
status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:29.808062 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:29.808069 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:29.808077 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:29.808087 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:29.808093 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:29.808103 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:29.808113 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:29.808133 223679 retry.go:31] will retry after 834.206799ms: missing components: kube-dns I0221 08:59:30.649684 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:30.649731 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:30.649746 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:30.649756 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:30.649766 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:30.649778 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:30.649792 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:30.649806 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:30.649817 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:30.649831 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:30.649852 223679 retry.go:31] will retry after 746.553905ms: missing components: kube-dns I0221 08:59:31.403363 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:31.403414 223679 
system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:31.403426 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:31.403438 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:31.403446 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:31.403455 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:31.403466 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:31.403474 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:31.403488 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:31.403498 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:31.403522 223679 retry.go:31] will retry after 987.362415ms: missing components: kube-dns I0221 08:59:28.498990 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:30.998871 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:29.848666 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:31.849001 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:32.397015 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:32.397055 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:32.397064 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:32.397075 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:32.397083 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:32.397090 223679 system_pods.go:89] 
"kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:32.397103 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:32.397110 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:32.397121 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:32.397132 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:32.397148 223679 retry.go:31] will retry after 1.189835008s: missing components: kube-dns I0221 08:59:33.607429 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:33.607467 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:33.607475 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:33.607484 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:33.607493 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:33.607500 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:33.607507 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:33.607531 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:33.607541 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:33.607550 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:33.607570 223679 retry.go:31] will retry after 1.677229867s: missing components: kube-dns I0221 08:59:35.291721 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:35.291757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:35.291767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: 
[calico-node]) I0221 08:59:35.291776 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:35.291783 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:35.291792 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:35.291798 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:35.291809 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:35.291815 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:35.291826 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:35.291840 223679 retry.go:31] will retry after 2.346016261s: missing components: kube-dns I0221 08:59:33.499218 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:35.998834 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:34.349423 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:36.849024 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:37.644075 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:37.644109 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:37.644117 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:37.644124 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:37.644131 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:37.644136 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:37.644140 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:37.644144 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:37.644147 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:37.644153 223679 system_pods.go:89] "storage-provisioner" 
[35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:37.644169 223679 retry.go:31] will retry after 3.36678925s: missing components: kube-dns I0221 08:59:41.020218 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:41.020262 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:41.020274 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:41.020284 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:41.020290 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:41.020296 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:41.020301 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:41.020307 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:41.020324 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:41.020332 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:41.020346 223679 retry.go:31] will retry after 3.11822781s: missing components: kube-dns I0221 08:59:38.498252 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:40.499308 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:39.349078 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:41.848438 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:44.146493 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:44.146526 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:44.146534 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:44.146544 223679 
system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:44.146552 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:44.146563 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:44.146570 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:44.146582 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:44.146593 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:44.146603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:44.146623 223679 retry.go:31] will retry after 4.276119362s: missing components: kube-dns I0221 08:59:42.998921 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:45.498291 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:44.348710 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:46.849283 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:48.850157 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:48.430784 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:48.430822 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:48.430855 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:48.430867 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:48.430880 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:48.430889 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:48.430901 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:48.430911 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:48.430921 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] 
Running I0221 08:59:48.430931 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:48.431005 223679 retry.go:31] will retry after 5.167232101s: missing components: kube-dns I0221 08:59:47.498914 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:49.998220 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:51.999087 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:51.349913 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:53.848663 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:53.607863 223679 system_pods.go:86] 9 kube-system pods found I0221 08:59:53.607910 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 08:59:53.607925 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 08:59:53.607936 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 08:59:53.607950 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 08:59:53.607957 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 08:59:53.607965 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 08:59:53.607971 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 08:59:53.607979 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:53.607991 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:53.608009 223679 retry.go:31] will retry after 6.994901864s: missing components: kube-dns I0221 08:59:53.999129 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:56.497881 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 08:59:55.849681 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 08:59:58.348890 208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False" I0221 
09:00:00.608725  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:00.608757  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:00.608767  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:00.608774  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:00.608778  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:00.608783  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:00.608788  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:00.608791  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:00.608796  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:00.608801  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:00.608818  223679 retry.go:31] will retry after 7.91826225s: missing components: kube-dns
I0221 08:59:58.498148  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:00.999242  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:00.349704  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:02.851497  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:03.498525  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:05.999154  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:05.348387  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:07.348675  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:08.534545  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:08.534589  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:08.534602  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:08.534613  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:08.534621  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:08.534630  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:08.534642  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:08.534654  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:08.534665  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:08.534678  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:08.534700  223679 retry.go:31] will retry after 9.953714808s: missing components: kube-dns
I0221 09:00:08.498881  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:10.998464  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:09.349729  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:11.848467  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:13.848882  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:12.998682  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:14.999363  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:16.350910  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:18.848692  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:18.494832  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:18.494873  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:18.494884  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:18.494893  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:18.494898  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:18.494903  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:18.494909  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:18.494918  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:18.494925  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:18.494935  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:18.494956  223679 retry.go:31] will retry after 15.120437328s: missing components: kube-dns
I0221 09:00:17.498767  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:19.499481  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:21.998971  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:20.849344  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:23.349381  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:24.499960  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:26.999269  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:25.849056  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:27.849318  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:29.499198  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:31.998892  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:30.349828  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:32.848757  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:33.622907  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:33.622950  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:33.622961  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:33.622970  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:33.622977  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:33.622983  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:33.622989  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:33.623036  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:33.623050  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:33.623058  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:00:33.623079  223679 retry.go:31] will retry after 14.90607158s: missing components: kube-dns
I0221 09:00:33.999959  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:36.498439  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:34.848956  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:36.849066  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:38.849119  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:38.998551  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:40.998664  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:41.348585  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:43.349457  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:42.999010  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:45.498414  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:45.850967  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:48.349610  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:48.536869  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:00:48.536919  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:00:48.536931  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:00:48.536941  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:00:48.536949  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:00:48.536955  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:00:48.536959  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:00:48.536964  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:00:48.536968  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:00:48.536982  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running
I0221 09:00:48.536998  223679 retry.go:31] will retry after 18.465989061s: missing components: kube-dns
I0221 09:00:47.498620  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:49.998601  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:51.999470  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:50.849439  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:53.348792  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:54.499043  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:56.499562  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:55.348932  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:57.847995  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:58.998197  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:00.998372  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:00:59.848674  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:01.849363  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:02.999674  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:05.499244  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:04.348871  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:06.349795  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:08.849206  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:07.010825  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:01:07.010865  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:01:07.010877  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:01:07.010887  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:07.010895  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:01:07.010902  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:01:07.010908  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:01:07.010925  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:01:07.010931  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:01:07.010939  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running
I0221 09:01:07.010960  223679 retry.go:31] will retry after 25.219510332s: missing components: kube-dns
I0221 09:01:07.998930  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:10.499101  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:11.349117  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:13.848278  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:12.499436  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:14.998244  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:16.998957  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:15.849578  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:18.348555  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:19.499569  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:21.503811  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:20.349090  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:22.848149  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:23.998532  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:26.001410  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:25.348797  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:27.349734  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:28.497652  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:30.497882  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:29.848914  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:31.849118  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:33.850062  208829 pod_ready.go:102] pod "coredns-64897985d-rg6k7" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:32.236004  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:01:32.236044  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:01:32.236056  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:01:32.236064  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:32.236072  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:01:32.236078  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:01:32.236084  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:01:32.236091  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:01:32.236097  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:01:32.236107  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:01:32.236125  223679 retry.go:31] will retry after 35.078569648s: missing components: kube-dns
I0221 09:01:32.498505  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:34.499389  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:36.998781  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:34.352622  208829 pod_ready.go:81] duration metric: took 4m0.01541005s waiting for pod "coredns-64897985d-rg6k7" in "kube-system" namespace to be "Ready" ...
E0221 09:01:34.352645  208829 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0221 09:01:34.352653  208829 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.356337  208829 pod_ready.go:92] pod "etcd-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.356357  208829 pod_ready.go:81] duration metric: took 3.698768ms waiting for pod "etcd-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.356365  208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.360068  208829 pod_ready.go:92] pod "kube-apiserver-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.360086  208829 pod_ready.go:81] duration metric: took 3.71506ms waiting for pod "kube-apiserver-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.360094  208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.363833  208829 pod_ready.go:92] pod "kube-controller-manager-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.363854  208829 pod_ready.go:81] duration metric: took 3.753995ms waiting for pod "kube-controller-manager-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.363864  208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-j6t4r" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.747517  208829 pod_ready.go:92] pod "kube-proxy-j6t4r" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:34.747544  208829 pod_ready.go:81] duration metric: took 383.671848ms waiting for pod "kube-proxy-j6t4r" in "kube-system" namespace to be "Ready" ...
I0221 09:01:34.747559  208829 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:35.147507  208829 pod_ready.go:92] pod "kube-scheduler-auto-20220221084933-6550" in "kube-system" namespace has status "Ready":"True"
I0221 09:01:35.147532  208829 pod_ready.go:81] duration metric: took 399.96592ms waiting for pod "kube-scheduler-auto-20220221084933-6550" in "kube-system" namespace to be "Ready" ...
I0221 09:01:35.147543  208829 pod_ready.go:38] duration metric: took 4m2.842909165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 09:01:35.147607  208829 api_server.go:51] waiting for apiserver process to appear ...
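Each `pod_ready.go` wait above is a poll of the pod's PodReady condition against the apiserver. A small client-go sketch of that loop follows; it is a reconstruction under assumed names (waitPodReady is hypothetical), not minikube's actual pod_ready.go:

    // podready_sketch.go — illustrative reconstruction of the wait loop.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady fetches the pod on an interval and reports whether its
    // PodReady condition has become True before the timeout.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling on transient errors
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        if err := waitPodReady(cs, "kube-system", "coredns-64897985d-rg6k7", 5*time.Minute); err != nil {
            fmt.Println("WaitExtra: waitPodCondition:", err)
        }
    }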
I0221 09:01:35.147666  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:01:35.184048  208829 logs.go:274] 1 containers: [ee44803ab83a]
I0221 09:01:35.184116  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:01:35.222133  208829 logs.go:274] 1 containers: [b23ee2bbc19d]
I0221 09:01:35.222212  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:01:35.257673  208829 logs.go:274] 1 containers: [9ec110d5717f]
I0221 09:01:35.257742  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:01:35.291205  208829 logs.go:274] 1 containers: [c78588822ac6]
I0221 09:01:35.291278  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:01:35.324945  208829 logs.go:274] 1 containers: [76924ebff838]
I0221 09:01:35.325015  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:01:35.359782  208829 logs.go:274] 0 containers: []
W0221 09:01:35.359804  208829 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:01:35.359842  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:01:35.393096  208829 logs.go:274] 1 containers: [1cd0b722c1ad]
I0221 09:01:35.393159  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:01:35.428486  208829 logs.go:274] 1 containers: [0bb1b94ca5a9]
I0221 09:01:35.428556  208829 logs.go:123] Gathering logs for coredns [9ec110d5717f] ...
I0221 09:01:35.428576  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec110d5717f"
I0221 09:01:35.465039  208829 logs.go:123] Gathering logs for kube-scheduler [c78588822ac6] ...
I0221 09:01:35.465067  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78588822ac6"
I0221 09:01:35.508531  208829 logs.go:123] Gathering logs for kube-proxy [76924ebff838] ...
I0221 09:01:35.508563  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76924ebff838"
I0221 09:01:35.544059  208829 logs.go:123] Gathering logs for kube-apiserver [ee44803ab83a] ...
I0221 09:01:35.544087  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee44803ab83a"
I0221 09:01:35.589418  208829 logs.go:123] Gathering logs for etcd [b23ee2bbc19d] ...
I0221 09:01:35.589467  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b23ee2bbc19d"
I0221 09:01:35.638600  208829 logs.go:123] Gathering logs for describe nodes ...
I0221 09:01:35.638638  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:01:35.726202  208829 logs.go:123] Gathering logs for storage-provisioner [1cd0b722c1ad] ...
I0221 09:01:35.726239  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd0b722c1ad"
I0221 09:01:35.768986  208829 logs.go:123] Gathering logs for kube-controller-manager [0bb1b94ca5a9] ...
I0221 09:01:35.769016  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb1b94ca5a9"
I0221 09:01:35.823631  208829 logs.go:123] Gathering logs for Docker ...
I0221 09:01:35.823668  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:01:35.842264  208829 logs.go:123] Gathering logs for container status ...
I0221 09:01:35.842298  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:01:35.879259  208829 logs.go:123] Gathering logs for kubelet ...
I0221 09:01:35.879298  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:01:35.948001  208829 logs.go:123] Gathering logs for dmesg ...
I0221 09:01:35.948047  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:01:38.481944  208829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 09:01:38.504799  208829 api_server.go:71] duration metric: took 4m6.447340023s to wait for apiserver process to appear ...
I0221 09:01:38.504830  208829 api_server.go:87] waiting for apiserver healthz status ...
I0221 09:01:38.504879  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:01:38.537954  208829 logs.go:274] 1 containers: [ee44803ab83a]
I0221 09:01:38.538037  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:01:38.571333  208829 logs.go:274] 1 containers: [b23ee2bbc19d]
I0221 09:01:38.571405  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:01:38.604685  208829 logs.go:274] 1 containers: [9ec110d5717f]
I0221 09:01:38.604755  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:01:38.638264  208829 logs.go:274] 1 containers: [c78588822ac6]
I0221 09:01:38.638348  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:01:38.673235  208829 logs.go:274] 1 containers: [76924ebff838]
I0221 09:01:38.673305  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:01:38.706125  208829 logs.go:274] 0 containers: []
W0221 09:01:38.706156  208829 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:01:38.706205  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:01:38.739965  208829 logs.go:274] 1 containers: [1cd0b722c1ad]
I0221 09:01:38.740043  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:01:38.773046  208829 logs.go:274] 1 containers: [0bb1b94ca5a9]
I0221 09:01:38.773090  208829 logs.go:123] Gathering logs for dmesg ...
I0221 09:01:38.773105  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:01:38.807138  208829 logs.go:123] Gathering logs for coredns [9ec110d5717f] ...
I0221 09:01:38.807175  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec110d5717f"
I0221 09:01:38.850852  208829 logs.go:123] Gathering logs for kubelet ...
I0221 09:01:38.850885  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:01:38.915403  208829 logs.go:123] Gathering logs for describe nodes ...
I0221 09:01:38.915466  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:01:39.005837  208829 logs.go:123] Gathering logs for kube-apiserver [ee44803ab83a] ...
I0221 09:01:39.005870  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee44803ab83a"
I0221 09:01:39.059582  208829 logs.go:123] Gathering logs for etcd [b23ee2bbc19d] ...
I0221 09:01:39.059627  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b23ee2bbc19d"
I0221 09:01:39.106453  208829 logs.go:123] Gathering logs for kube-scheduler [c78588822ac6] ...
I0221 09:01:39.106489  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78588822ac6"
I0221 09:01:39.159258  208829 logs.go:123] Gathering logs for kube-proxy [76924ebff838] ...
I0221 09:01:39.159304  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76924ebff838"
I0221 09:01:39.195407  208829 logs.go:123] Gathering logs for storage-provisioner [1cd0b722c1ad] ...
I0221 09:01:39.195434  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd0b722c1ad"
I0221 09:01:39.497987  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:41.999075  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:39.237051  208829 logs.go:123] Gathering logs for kube-controller-manager [0bb1b94ca5a9] ...
I0221 09:01:39.237078  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb1b94ca5a9"
I0221 09:01:39.289740  208829 logs.go:123] Gathering logs for Docker ...
I0221 09:01:39.289779  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:01:39.309121  208829 logs.go:123] Gathering logs for container status ...
I0221 09:01:39.309166  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:01:41.850261  208829 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0221 09:01:41.855142  208829 api_server.go:266] https://192.168.76.2:8443/healthz returned 200: ok
I0221 09:01:41.856107  208829 api_server.go:140] control plane version: v1.23.4
I0221 09:01:41.856131  208829 api_server.go:130] duration metric: took 3.351295129s to wait for apiserver health ...
I0221 09:01:41.856140  208829 system_pods.go:43] waiting for kube-system pods to appear ...
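The healthz check above is a plain HTTPS GET against the apiserver that succeeds on a 200 "ok" body. A minimal sketch follows; the endpoint is taken from the log, but the skipped TLS verification and absent client credentials are simplifying assumptions (a real client would present the cluster CA and certs):

    // healthz_sketch.go — illustrative probe; TLS handling is simplified.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a cert signed by the cluster CA; skipping
            // verification keeps this sketch short, nothing more.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://192.168.76.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }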
I0221 09:01:41.856194  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 09:01:41.890006  208829 logs.go:274] 1 containers: [ee44803ab83a]
I0221 09:01:41.890088  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 09:01:41.923013  208829 logs.go:274] 1 containers: [b23ee2bbc19d]
I0221 09:01:41.923093  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 09:01:41.958916  208829 logs.go:274] 1 containers: [9ec110d5717f]
I0221 09:01:41.958990  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 09:01:41.994619  208829 logs.go:274] 1 containers: [c78588822ac6]
I0221 09:01:41.994705  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 09:01:42.037650  208829 logs.go:274] 1 containers: [76924ebff838]
I0221 09:01:42.037726  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 09:01:42.075743  208829 logs.go:274] 0 containers: []
W0221 09:01:42.075768  208829 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 09:01:42.075820  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 09:01:42.118071  208829 logs.go:274] 1 containers: [1cd0b722c1ad]
I0221 09:01:42.118163  208829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 09:01:42.159633  208829 logs.go:274] 1 containers: [0bb1b94ca5a9]
I0221 09:01:42.159684  208829 logs.go:123] Gathering logs for describe nodes ...
I0221 09:01:42.159700  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 09:01:42.252178  208829 logs.go:123] Gathering logs for kube-apiserver [ee44803ab83a] ...
I0221 09:01:42.252212  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee44803ab83a"
I0221 09:01:42.298061  208829 logs.go:123] Gathering logs for etcd [b23ee2bbc19d] ...
I0221 09:01:42.298092  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b23ee2bbc19d"
I0221 09:01:42.348980  208829 logs.go:123] Gathering logs for kube-scheduler [c78588822ac6] ...
I0221 09:01:42.349015  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78588822ac6"
I0221 09:01:42.394629  208829 logs.go:123] Gathering logs for kube-proxy [76924ebff838] ...
I0221 09:01:42.394665  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76924ebff838"
I0221 09:01:42.435725  208829 logs.go:123] Gathering logs for storage-provisioner [1cd0b722c1ad] ...
I0221 09:01:42.435765  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd0b722c1ad"
I0221 09:01:42.475586  208829 logs.go:123] Gathering logs for kubelet ...
I0221 09:01:42.475618  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 09:01:42.539602  208829 logs.go:123] Gathering logs for coredns [9ec110d5717f] ...
I0221 09:01:42.539644  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec110d5717f"
I0221 09:01:42.586017  208829 logs.go:123] Gathering logs for kube-controller-manager [0bb1b94ca5a9] ...
I0221 09:01:42.586047  208829 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb1b94ca5a9"
I0221 09:01:42.639424  208829 logs.go:123] Gathering logs for Docker ...
I0221 09:01:42.639458  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 09:01:42.658416  208829 logs.go:123] Gathering logs for container status ...
I0221 09:01:42.658457  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 09:01:42.691083  208829 logs.go:123] Gathering logs for dmesg ...
I0221 09:01:42.691113  208829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 09:01:45.232111  208829 system_pods.go:59] 7 kube-system pods found
I0221 09:01:45.232150  208829 system_pods.go:61] "coredns-64897985d-rg6k7" [b5b504ee-2e2d-4f88-84b8-ce018dbb6549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:45.232157  208829 system_pods.go:61] "etcd-auto-20220221084933-6550" [c25df683-b01e-4f2a-8e47-7a1409996649] Running
I0221 09:01:45.232162  208829 system_pods.go:61] "kube-apiserver-auto-20220221084933-6550" [ae612da3-338a-40de-98fd-f627bf47483f] Running
I0221 09:01:45.232166  208829 system_pods.go:61] "kube-controller-manager-auto-20220221084933-6550" [cf06723d-2296-4a7f-a9fc-f5c629f0c7aa] Running
I0221 09:01:45.232171  208829 system_pods.go:61] "kube-proxy-j6t4r" [eb672423-9289-4e70-93e6-75fa71e1c263] Running
I0221 09:01:45.232175  208829 system_pods.go:61] "kube-scheduler-auto-20220221084933-6550" [7a81e5fe-13d9-4994-9a6d-e0da219b2414] Running
I0221 09:01:45.232183  208829 system_pods.go:61] "storage-provisioner" [cb2b449c-788d-4efb-9f51-1de24e609c8b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:01:45.232191  208829 system_pods.go:74] duration metric: took 3.376043984s to wait for pod list to return data ...
I0221 09:01:45.232206  208829 default_sa.go:34] waiting for default service account to be created ...
I0221 09:01:45.234739  208829 default_sa.go:45] found service account: "default"
I0221 09:01:45.234761  208829 default_sa.go:55] duration metric: took 2.545741ms for default service account to be created ...
I0221 09:01:45.234768  208829 system_pods.go:116] waiting for k8s-apps to be running ...
I0221 09:01:45.238869  208829 system_pods.go:86] 7 kube-system pods found
I0221 09:01:45.238897  208829 system_pods.go:89] "coredns-64897985d-rg6k7" [b5b504ee-2e2d-4f88-84b8-ce018dbb6549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:01:45.238903  208829 system_pods.go:89] "etcd-auto-20220221084933-6550" [c25df683-b01e-4f2a-8e47-7a1409996649] Running
I0221 09:01:45.238908  208829 system_pods.go:89] "kube-apiserver-auto-20220221084933-6550" [ae612da3-338a-40de-98fd-f627bf47483f] Running
I0221 09:01:45.238912  208829 system_pods.go:89] "kube-controller-manager-auto-20220221084933-6550" [cf06723d-2296-4a7f-a9fc-f5c629f0c7aa] Running
I0221 09:01:45.238916  208829 system_pods.go:89] "kube-proxy-j6t4r" [eb672423-9289-4e70-93e6-75fa71e1c263] Running
I0221 09:01:45.238920  208829 system_pods.go:89] "kube-scheduler-auto-20220221084933-6550" [7a81e5fe-13d9-4994-9a6d-e0da219b2414] Running
I0221 09:01:45.238950  208829 system_pods.go:89] "storage-provisioner" [cb2b449c-788d-4efb-9f51-1de24e609c8b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:01:45.238960  208829 system_pods.go:126] duration metric: took 4.188514ms to wait for k8s-apps to be running ...
I0221 09:01:45.238966  208829 system_svc.go:44] waiting for kubelet service to be running ....
I0221 09:01:45.239044  208829 ssh_runner.go:195] Run: sudo service kubelet status
I0221 09:01:45.258078  208829 system_svc.go:56] duration metric: took 19.104209ms WaitForService to wait for kubelet.
I0221 09:01:45.258115  208829 kubeadm.go:548] duration metric: took 4m13.200661633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0221 09:01:45.258142  208829 node_conditions.go:102] verifying NodePressure condition ...
I0221 09:01:45.263650  208829 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0221 09:01:45.263679  208829 node_conditions.go:123] node cpu capacity is 8
I0221 09:01:45.263695  208829 node_conditions.go:105] duration metric: took 5.547069ms to run NodePressure ...
I0221 09:01:45.263705  208829 start.go:213] waiting for startup goroutines ...
I0221 09:01:45.306637  208829 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0)
I0221 09:01:45.310494  208829 out.go:176] * Done! kubectl is now configured to use "auto-20220221084933-6550" cluster and "default" namespace by default
I0221 09:01:43.999131  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:45.999453  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:48.498612  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:50.502349  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:53.000328  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:55.498350  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:57.498897  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:01:59.998589  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:02.498112  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:04.499166  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:06.499366  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:07.320903  223679 system_pods.go:86] 9 kube-system pods found
I0221 09:02:07.320944  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 09:02:07.320955  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 09:02:07.320961  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 09:02:07.320967  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 09:02:07.320973  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 09:02:07.320977  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 09:02:07.320981  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 09:02:07.320985  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 09:02:07.320990  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 09:02:07.321002  223679 retry.go:31] will retry after 50.027701973s: missing components: kube-dns
I0221 09:02:08.998138  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:10.998798  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:12.998867  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:14.999708  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:17.499134  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:19.998038  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:21.999415  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:24.503262  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:26.998872  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:28.999023  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:31.498312  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:33.498493  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:35.999270  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
*
* ==> Docker <==
*
-- Logs begin at Mon 2022-02-21 08:55:41 UTC, end at Mon 2022-02-21 09:02:41 UTC. --
Feb 21 08:55:43 false-20220221084934-6550 systemd[1]: Stopped Docker Application Container Engine.
Feb 21 08:55:43 false-20220221084934-6550 systemd[1]: Starting Docker Application Container Engine...
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.641972110Z" level=info msg="Starting up"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644349551Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644393261Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644433556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.644451679Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645804236Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645939514Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645973818Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.645984191Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.652317916Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658136600Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658167911Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658175093Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.658365740Z" level=info msg="Loading containers: start."
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.770668825Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.814491493Z" level=info msg="Loading containers: done."
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.832796730Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.832893339Z" level=info msg="Daemon has completed initialization"
Feb 21 08:55:43 false-20220221084934-6550 systemd[1]: Started Docker Application Container Engine.
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.854247043Z" level=info msg="API listen on [::]:2376"
Feb 21 08:55:43 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:55:43.857933661Z" level=info msg="API listen on /var/run/docker.sock"
Feb 21 08:56:17 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:56:17.743295566Z" level=info msg="ignoring event" container=f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 08:56:17 false-20220221084934-6550 dockerd[457]: time="2022-02-21T08:56:17.794346836Z" level=info msg="ignoring event" container=e6c7cf2ddcf6c41555cce331d7cd9cd5d0c46cf25daa9b590b194449b67d31c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER      IMAGE                                                                                                               CREATED         STATE    NAME                      ATTEMPT  POD ID
ea86a1d35b73f  k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1          6 minutes ago   Running  dnsutils                  0        9d884bdb5ec49
5ea2efd751380  6e38f40d628db                                                                                                       6 minutes ago   Running  storage-provisioner       0        3329154f839c3
d912cc0c981d4  a4ca41631cc7a                                                                                                       6 minutes ago   Running  coredns                   0        654d30a3d4079
8a0c30ea7fd7c  2114245ec4d6b                                                                                                       6 minutes ago   Running  kube-proxy                0        1ebf20a1a27fc
d7932880a27cd  aceacb6244f9f                                                                                                       6 minutes ago   Running  kube-scheduler            0        30704c112d028
2187c92e487ba  25f8c7f3da61c                                                                                                       6 minutes ago   Running  etcd                      0        6bf3b50bb5eb4
9457fb7075229  62930710c9634                                                                                                       6 minutes ago   Running  kube-apiserver            0        e6986cb941737
a7e7eaacf8427  25444908517a5                                                                                                       6 minutes ago   Running  kube-controller-manager   0        4fda3c01a2916
*
* ==> coredns [d912cc0c981d] <==
*
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
*
* ==> describe nodes <==
*
Name:               false-20220221084934-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=false-20220221084934-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=false-20220221084934-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T08_55_57_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 08:55:54 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  false-20220221084934-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:02:35 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Mon, 21 Feb 2022 09:01:35 +0000   Mon, 21 Feb 2022 08:55:51 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Mon, 21 Feb 2022 09:01:35 +0000   Mon, 21 Feb 2022 08:55:51 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Mon, 21 Feb 2022 09:01:35 +0000   Mon, 21 Feb 2022 08:55:51 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Mon, 21 Feb 2022 09:01:35 +0000   Mon, 21 Feb 2022 08:56:08 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    false-20220221084934-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                082ec138-1616-4f1c-85e0-734b853b620f
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace    Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                                 ------------  ----------  ---------------  -------------  ---
  default      netcat-668db85669-gl7hj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
  kube-system  coredns-64897985d-9k8b6                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m31s
  kube-system  etcd-false-20220221084934-6550                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m44s
  kube-system  kube-apiserver-false-20220221084934-6550             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m44s
  kube-system  kube-controller-manager-false-20220221084934-6550    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m45s
  kube-system  kube-proxy-mlfhq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
  kube-system  kube-scheduler-false-20220221084934-6550             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m44s
  kube-system  storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  Starting                 6m30s                  kube-proxy  
  Normal  NodeHasSufficientMemory  6m52s (x4 over 6m53s)  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m52s (x3 over 6m53s)  kubelet     Node false-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m52s (x3 over 6m53s)  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  Starting                 6m45s                  kubelet     Starting kubelet.
  Normal  NodeHasNoDiskPressure    6m45s                  kubelet     Node false-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m45s                  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  6m45s                  kubelet     Node false-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeNotReady             6m44s                  kubelet     Node false-20220221084934-6550 status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                6m34s                  kubelet     Node false-20220221084934-6550 status is now: NodeReady
*
* ==> dmesg <==
*
[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 bf dd f8 dd 25 08 06
[  +3.033891] IPv4: martian source 10.85.0.141 from 10.85.0.141, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 de 02 bf 6b fe 08 06
[  +3.108367] IPv4: martian source 10.85.0.142 from 10.85.0.142, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bd 92 b1 df 50 08 06
[  +3.036056] IPv4: martian source 10.85.0.143 from 10.85.0.143, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 1f 60 a8 09 4e 08 06
[  +2.954252] IPv4: martian source 10.85.0.144 from 10.85.0.144, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 43 39 ae e2 13 08 06
[  +3.203300] IPv4: martian source 10.85.0.145 from 10.85.0.145, on dev eth0
[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 3e 0e a7 c7 cc 08 06
[  +2.484933] IPv4: martian source 10.85.0.146 from 10.85.0.146, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 67 74 76 d8 af 08 06
[  +2.531504] IPv4: martian source 10.85.0.147 from 10.85.0.147, on dev eth0
[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e cc b7 0a 27 7e 08 06
[  +3.156388] IPv4: martian source 10.85.0.148 from 10.85.0.148, on dev eth0
[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 3a c4 2f a5 8f 08 06
[  +2.783142] IPv4: martian source 10.85.0.149 from 10.85.0.149, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 2e 75 c1 1f e5 08 06
[  +3.065560] IPv4: martian source 10.85.0.150 from 10.85.0.150, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 88 c1 5c 06 a5 08 06
[  +3.173096] IPv4: martian source 10.85.0.151 from 10.85.0.151, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e b2 3f 08 7a 3c 08 06
[  +2.513515] IPv4: martian source 10.85.0.152 from 10.85.0.152, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e bc d4 ae 6d 61 08 06
*
* ==> etcd [2187c92e487b] <==
*
{"level":"info","ts":"2022-02-21T08:55:51.131Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-02-21T08:55:51.131Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-02-21T08:55:51.132Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-02-21T08:55:51.132Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-02-21T08:55:51.132Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:false-20220221084934-6550 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T08:55:52.019Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.020Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T08:55:52.021Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-02-21T08:55:52.021Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2022-02-21T08:56:58.930Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"198.564035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T08:56:58.931Z","caller":"traceutil/trace.go:171","msg":"trace[1439317579] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:550; }","duration":"198.721248ms","start":"2022-02-21T08:56:58.732Z","end":"2022-02-21T08:56:58.930Z","steps":["trace[1439317579] 'range keys from in-memory index tree' (duration: 198.422442ms)"],"step_count":1} * * ==> kernel <== * 09:02:42 up 45 min, 0 users, load average: 4.35, 4.42, 3.54 Linux false-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [9457fb707522] <== * I0221 08:55:54.765251 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:55:54.765269 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 08:55:54.765256 1 cache.go:39] Caches are synced for autoregister controller I0221 08:55:54.768869 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:55:54.802059 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 08:55:54.802111 1 shared_informer.go:247] Caches are synced for crd-autoregister I0221 08:55:55.665387 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:55:55.665422 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:55:55.681704 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:55:55.685033 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:55:55.685056 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 08:55:56.157792 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0221 08:55:56.193703 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0221 08:55:56.330378 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0221 08:55:56.335482 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0221 08:55:56.336420 1 controller.go:611] quota admission added evaluator for: endpoints
I0221 08:55:56.340066 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0221 08:55:56.827742 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0221 08:55:57.513454 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0221 08:55:57.521139 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0221 08:55:57.531819 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0221 08:56:10.881594 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0221 08:56:10.930759 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0221 08:56:12.024488 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0221 08:56:16.361946 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.106.71.151]
* 
* ==> kube-controller-manager [a7e7eaacf842] <==
* I0221 08:56:10.276301 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone:
I0221 08:56:10.276312 1 shared_informer.go:247] Caches are synced for endpoint_slice
W0221 08:56:10.276375 1 node_lifecycle_controller.go:1012] Missing timestamp for Node false-20220221084934-6550. Assuming now as a timestamp.
I0221 08:56:10.276390 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0221 08:56:10.276405 1 shared_informer.go:247] Caches are synced for stateful set
I0221 08:56:10.276458 1 event.go:294] "Event occurred" object="false-20220221084934-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node false-20220221084934-6550 event: Registered Node false-20220221084934-6550 in Controller"
I0221 08:56:10.276466 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal.
I0221 08:56:10.286454 1 shared_informer.go:247] Caches are synced for resource quota
I0221 08:56:10.294638 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0221 08:56:10.311994 1 shared_informer.go:247] Caches are synced for endpoint
I0221 08:56:10.328263 1 shared_informer.go:247] Caches are synced for resource quota
I0221 08:56:10.376678 1 shared_informer.go:247] Caches are synced for cronjob
I0221 08:56:10.376713 1 shared_informer.go:247] Caches are synced for job
I0221 08:56:10.376689 1 shared_informer.go:247] Caches are synced for TTL after finished
I0221 08:56:10.747622 1 shared_informer.go:247] Caches are synced for garbage collector
I0221 08:56:10.776595 1 shared_informer.go:247] Caches are synced for garbage collector
I0221 08:56:10.776620 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0221 08:56:10.885251 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0221 08:56:10.936391 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlfhq"
I0221 08:56:11.012442 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0221 08:56:11.132460 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-snkv2"
I0221 08:56:11.135927 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9k8b6"
I0221 08:56:11.153674 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-snkv2"
I0221 08:56:16.364324 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1"
I0221 08:56:16.371476 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-gl7hj"
* 
* ==> kube-proxy [8a0c30ea7fd7] <==
* I0221 08:56:11.827853 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0221 08:56:11.827946 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0221 08:56:11.827994 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0221 08:56:12.009554 1 server_others.go:206] "Using iptables Proxier"
I0221 08:56:12.019908 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0221 08:56:12.019933 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0221 08:56:12.019962 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0221 08:56:12.020760 1 server.go:656] "Version info" version="v1.23.4"
I0221 08:56:12.021637 1 config.go:226] "Starting endpoint slice config controller"
I0221 08:56:12.021666 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0221 08:56:12.021800 1 config.go:317] "Starting service config controller"
I0221 08:56:12.021806 1 shared_informer.go:240] Waiting for caches to sync for service config
I0221 08:56:12.122213 1 shared_informer.go:247] Caches are synced for service config
I0221 08:56:12.122323 1 shared_informer.go:247] Caches are synced for endpoint slice config
* 
* ==> kube-scheduler [d7932880a27c] <==
* W0221 08:55:54.734855 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0221 08:55:54.735560 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0221 08:55:54.735489 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0221 08:55:54.735587 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0221 08:55:54.735153 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0221 08:55:54.735599 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0221 08:55:54.736340 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0221 08:55:54.736398 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0221 08:55:55.648728 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0221 08:55:55.648772 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 08:55:55.813293 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0221 08:55:55.813330 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0221 08:55:55.824829 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0221 08:55:55.824882 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0221 08:55:55.845490 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0221 08:55:55.845529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0221 08:55:55.878719 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0221 08:55:55.878760 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0221 08:55:55.928333 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0221 08:55:55.928442 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0221 08:55:56.004454 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0221 08:55:56.004493 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0221 08:55:56.004454 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0221 08:55:56.004520 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
I0221 08:55:58.230940 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* 
* ==> kubelet <==
* -- Logs begin at Mon 2022-02-21 08:55:41 UTC, end at Mon 2022-02-21 09:02:42 UTC. --
Feb 21 08:56:12 false-20220221084934-6550 kubelet[1923]: I0221 08:56:12.409928 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9k8b6 through plugin: invalid network status for"
Feb 21 08:56:12 false-20220221084934-6550 kubelet[1923]: I0221 08:56:12.664199 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-snkv2 through plugin: invalid network status for"
Feb 21 08:56:12 false-20220221084934-6550 kubelet[1923]: I0221 08:56:12.672723 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9k8b6 through plugin: invalid network status for"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.303997 1923 topology_manager.go:200] "Topology Admit Handler"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.320731 1923 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e58a0e76-397e-4653-82c8-a63621513203-tmp\") pod \"storage-provisioner\" (UID: \"e58a0e76-397e-4653-82c8-a63621513203\") " pod="kube-system/storage-provisioner"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.320776 1923 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqwk\" (UniqueName: \"kubernetes.io/projected/e58a0e76-397e-4653-82c8-a63621513203-kube-api-access-hmqwk\") pod \"storage-provisioner\" (UID: \"e58a0e76-397e-4653-82c8-a63621513203\") " pod="kube-system/storage-provisioner"
Feb 21 08:56:13 false-20220221084934-6550 kubelet[1923]: I0221 08:56:13.783913 1923 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3329154f839c36499d1d5650e062fdeeefc2b87a21cdce53605eb8cc5deab440"
Feb 21 08:56:16 false-20220221084934-6550 kubelet[1923]: I0221 08:56:16.375695 1923 topology_manager.go:200] "Topology Admit Handler"
Feb 21 08:56:16 false-20220221084934-6550 kubelet[1923]: I0221 08:56:16.438066 1923 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfgc\" (UniqueName: \"kubernetes.io/projected/ba6605ea-dfed-40ce-83bd-cbd1b3c35da1-kube-api-access-mcfgc\") pod \"netcat-668db85669-gl7hj\" (UID: \"ba6605ea-dfed-40ce-83bd-cbd1b3c35da1\") " pod="default/netcat-668db85669-gl7hj"
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.017545 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/netcat-668db85669-gl7hj through plugin: invalid network status for"
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.017821 1923 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9d884bdb5ec490729eecb9c8241e2a0b987899f13fac076c2daff26fe1d6cfb2"
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.949572 1923 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-config-volume\") pod \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\" (UID: \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\") "
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.949640 1923 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2hhn\" (UniqueName: \"kubernetes.io/projected/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-kube-api-access-x2hhn\") pod \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\" (UID: \"2ca2a7a8-2903-47ca-bcf3-097175f8bc79\") "
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: W0221 08:56:17.949904 1923 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/2ca2a7a8-2903-47ca-bcf3-097175f8bc79/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.951222 1923 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ca2a7a8-2903-47ca-bcf3-097175f8bc79" (UID: "2ca2a7a8-2903-47ca-bcf3-097175f8bc79"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:56:17 false-20220221084934-6550 kubelet[1923]: I0221 08:56:17.952198 1923 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-kube-api-access-x2hhn" (OuterVolumeSpecName: "kube-api-access-x2hhn") pod "2ca2a7a8-2903-47ca-bcf3-097175f8bc79" (UID: "2ca2a7a8-2903-47ca-bcf3-097175f8bc79"). InnerVolumeSpecName "kube-api-access-x2hhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.032519 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/netcat-668db85669-gl7hj through plugin: invalid network status for"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.032579 1923 scope.go:110] "RemoveContainer" containerID="f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.046523 1923 scope.go:110] "RemoveContainer" containerID="f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: E0221 08:56:18.047358 1923 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2" containerID="f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.047421 1923 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2} err="failed to get container status \"f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2\": rpc error: code = Unknown desc = Error: No such container: f94f7f392e736bd87536f0490e3bf34465601e46c8d0581bbbe87d7fd75543c2"
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.050827 1923 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-config-volume\") on node \"false-20220221084934-6550\" DevicePath \"\""
Feb 21 08:56:18 false-20220221084934-6550 kubelet[1923]: I0221 08:56:18.050880 1923 reconciler.go:300] "Volume detached for volume \"kube-api-access-x2hhn\" (UniqueName: \"kubernetes.io/projected/2ca2a7a8-2903-47ca-bcf3-097175f8bc79-kube-api-access-x2hhn\") on node \"false-20220221084934-6550\" DevicePath \"\""
Feb 21 08:56:20 false-20220221084934-6550 kubelet[1923]: I0221 08:56:20.026953 1923 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2ca2a7a8-2903-47ca-bcf3-097175f8bc79 path="/var/lib/kubelet/pods/2ca2a7a8-2903-47ca-bcf3-097175f8bc79/volumes"
Feb 21 08:56:21 false-20220221084934-6550 kubelet[1923]: I0221 08:56:21.069881 1923 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/netcat-668db85669-gl7hj through plugin: invalid network status for"
* 
* ==> storage-provisioner [5ea2efd75138] <==
* I0221 08:56:13.910678 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0221 08:56:13.919371 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0221 08:56:13.919441 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0221 08:56:13.928725 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0221 08:56:13.928915 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_false-20220221084934-6550_d7189e91-1926-46aa-822b-8ac81b49033a!
I0221 08:56:13.929221 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"609f23d9-58d6-4551-87eb-7b3e8a7082a4", APIVersion:"v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' false-20220221084934-6550_d7189e91-1926-46aa-822b-8ac81b49033a became leader
I0221 08:56:14.029182 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_false-20220221084934-6550_d7189e91-1926-46aa-822b-8ac81b49033a!

-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p false-20220221084934-6550 -n false-20220221084934-6550
helpers_test.go:262: (dbg) Run: kubectl --context false-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/false]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context false-20220221084934-6550 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 describe pod : exit status 1 (41.507944ms)

** stderr **
error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context false-20220221084934-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "false-20220221084934-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p false-20220221084934-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p false-20220221084934-6550: (2.910147752s)
--- FAIL: TestNetworkPlugins/group/false (433.35s)

=== FAIL: . TestNetworkPlugins/group/custom-weave/Start (519.15s)
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p custom-weave-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=docker: exit status 105 (8m39.119069821s)

-- stdout --
* [custom-weave-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13641
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node custom-weave-20220221084934-6550 in cluster custom-weave-20220221084934-6550
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
- kubelet.housekeeping-interval=5m
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr **
I0221 08:54:47.458219 227869 out.go:297] Setting OutFile to fd 1 ...
I0221 08:54:47.458326 227869 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:54:47.458338 227869 out.go:310] Setting ErrFile to fd 2...
I0221 08:54:47.458344 227869 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 08:54:47.458503 227869 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
I0221 08:54:47.458917 227869 out.go:304] Setting JSON to false
I0221 08:54:47.461070 227869 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2242,"bootTime":1645431446,"procs":806,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0221 08:54:47.461183 227869 start.go:122] virtualization: kvm guest
I0221 08:54:47.464031 227869 out.go:176] * [custom-weave-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0221 08:54:47.464153 227869 notify.go:193] Checking for updates...
I0221 08:54:47.465465 227869 out.go:176] - MINIKUBE_LOCATION=13641
I0221 08:54:47.466737 227869 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0221 08:54:47.468108 227869 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 08:54:47.469317 227869 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
I0221 08:54:47.471589 227869 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0221 08:54:47.472040 227869 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:47.472126 227869 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:47.472199 227869 config.go:176] Loaded profile config "cilium-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:47.472247 227869 driver.go:344] Setting default libvirt URI to qemu:///system
I0221 08:54:47.517461 227869 docker.go:132] docker version: linux-20.10.12
I0221 08:54:47.517586 227869 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0221 08:54:47.620138 227869 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:47.551657257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}}
I0221 08:54:47.620271 227869 docker.go:237] overlay module found
I0221 08:54:47.622372 227869 out.go:176] * Using the docker driver based on user configuration
I0221 08:54:47.622397 227869 start.go:281] selected driver: docker
I0221 08:54:47.622412 227869 start.go:798] validating driver "docker" against
I0221 08:54:47.622433 227869 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
W0221 08:54:47.622515 227869 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0221 08:54:47.622540 227869 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0221 08:54:47.623978 227869 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0221 08:54:47.624791 227869 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0221 08:54:47.725034 227869 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:47.66170668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}}
I0221 08:54:47.725164 227869 start_flags.go:288] no existing cluster config was found, will generate one from the flags
I0221 08:54:47.725316 227869 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0221 08:54:47.725345 227869 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0221 08:54:47.725369 227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
I0221 08:54:47.725389 227869 start_flags.go:297] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
I0221 08:54:47.725399 227869 start_flags.go:302] config: {Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 08:54:47.727724 227869 out.go:176] * Starting control plane node custom-weave-20220221084934-6550 in cluster custom-weave-20220221084934-6550
I0221 08:54:47.727767 227869 cache.go:120] Beginning downloading kic base image for docker with docker
I0221 08:54:47.729212 227869 out.go:176] * Pulling base image ...
I0221 08:54:47.729243 227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 08:54:47.729280 227869 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
I0221 08:54:47.729295 227869 cache.go:57] Caching tarball of preloaded images
I0221 08:54:47.729343 227869 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0221 08:54:47.729540 227869 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0221 08:54:47.729557 227869 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker
I0221 08:54:47.729678 227869 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json ...
I0221 08:54:47.729700 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json: {Name:mka893c0a5ff8738d3209de71a273b5ed5f8c7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0221 08:54:47.776587 227869 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0221 08:54:47.776615 227869 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0221 08:54:47.776635 227869 cache.go:208] Successfully downloaded all kic artifacts
I0221 08:54:47.776674 227869 start.go:313] acquiring machines lock for custom-weave-20220221084934-6550: {Name:mk4ea336349dcf18d26ade5ee9a9024978187ca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0221 08:54:47.776813 227869 start.go:317] acquired machines lock for "custom-weave-20220221084934-6550" in 118.503µs
I0221 08:54:47.776843 227869 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 08:54:47.776919 227869 start.go:126] createHost starting for "" (driver="docker")
I0221 08:54:47.779541 227869 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0221 08:54:47.779787 227869 start.go:160] libmachine.API.Create for "custom-weave-20220221084934-6550" (driver="docker")
I0221 08:54:47.779820 227869 client.go:168] LocalClient.Create starting
I0221 08:54:47.779884 227869 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem
I0221 08:54:47.779933 227869 main.go:130] libmachine: Decoding PEM data...
I0221 08:54:47.779958 227869 main.go:130] libmachine: Parsing certificate...
I0221 08:54:47.780028 227869 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem
I0221 08:54:47.780052 227869 main.go:130] libmachine: Decoding PEM data...
I0221 08:54:47.780078 227869 main.go:130] libmachine: Parsing certificate...
I0221 08:54:47.780404 227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0221 08:54:47.812283 227869 cli_runner.go:180] docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0221 08:54:47.812354 227869 network_create.go:254] running [docker network inspect custom-weave-20220221084934-6550] to gather additional debugging logs...
I0221 08:54:47.812371 227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550
W0221 08:54:47.846261 227869 cli_runner.go:180] docker network inspect custom-weave-20220221084934-6550 returned with exit code 1
I0221 08:54:47.846317 227869 network_create.go:257] error running [docker network inspect custom-weave-20220221084934-6550]: docker network inspect custom-weave-20220221084934-6550: exit status 1
stdout:
[]

stderr:
Error: No such network: custom-weave-20220221084934-6550
I0221 08:54:47.846350 227869 network_create.go:259] output of [docker network inspect custom-weave-20220221084934-6550]:
-- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: custom-weave-20220221084934-6550

** /stderr **
I0221 08:54:47.846437 227869 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 08:54:47.880149 227869 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-8af72e223855 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:a5:dd:c8}}
I0221 08:54:47.880989 227869 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006d4200] misses:0}
I0221 08:54:47.881044 227869 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0221 08:54:47.881059 227869 network_create.go:106] attempt to create docker network custom-weave-20220221084934-6550 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0221 08:54:47.881116 227869 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220221084934-6550
I0221 08:54:47.951115 227869 network_create.go:90] docker network custom-weave-20220221084934-6550 192.168.58.0/24 created
I0221 08:54:47.951148 227869 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20220221084934-6550" container
I0221 08:54:47.951220 227869 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0221 08:54:47.991401 227869 cli_runner.go:133] Run: docker volume create custom-weave-20220221084934-6550 --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true
I0221 08:54:48.025554 227869 oci.go:102] Successfully created a docker volume custom-weave-20220221084934-6550
I0221 08:54:48.025643 227869 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --entrypoint /usr/bin/test -v custom-weave-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
I0221 08:54:48.595681 227869 oci.go:106] Successfully prepared a docker volume custom-weave-20220221084934-6550
I0221 08:54:48.595760 227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 08:54:48.595785 227869 kic.go:179] Starting extracting preloaded images to volume ...
I0221 08:54:48.595864 227869 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
I0221 08:54:54.606684 227869 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.010752765s)
I0221 08:54:54.606731 227869 kic.go:188] duration metric: took 6.010943 seconds to extract preloaded images to volume
W0221 08:54:54.606773 227869 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0221 08:54:54.606787 227869 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0221 08:54:54.606827 227869 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0221 08:54:54.713053 227869 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220221084934-6550 --name custom-weave-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --network custom-weave-20220221084934-6550 --ip 192.168.58.2 --volume custom-weave-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
I0221 08:54:55.197249 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Running}}
I0221 08:54:55.251551 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:54:55.285366 227869 cli_runner.go:133] Run: docker exec custom-weave-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables
I0221 08:54:55.364656 227869 oci.go:281] the created container "custom-weave-20220221084934-6550" has a running status.
I0221 08:54:55.364693 227869 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa...
I0221 08:54:55.460289 227869 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0221 08:54:55.569379 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:54:55.607358 227869 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0221 08:54:55.607386 227869 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys]
I0221 08:54:55.707944 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
I0221 08:54:55.746584 227869 machine.go:88] provisioning docker machine ...
I0221 08:54:55.746625 227869 ubuntu.go:169] provisioning hostname "custom-weave-20220221084934-6550" I0221 08:54:55.746679 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:55.782136 227869 main.go:130] libmachine: Using SSH client type: native I0221 08:54:55.782378 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 } I0221 08:54:55.782408 227869 main.go:130] libmachine: About to run SSH command: sudo hostname custom-weave-20220221084934-6550 && echo "custom-weave-20220221084934-6550" | sudo tee /etc/hostname I0221 08:54:55.920475 227869 main.go:130] libmachine: SSH cmd err, output: : custom-weave-20220221084934-6550 I0221 08:54:55.920553 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:55.975664 227869 main.go:130] libmachine: Using SSH client type: native I0221 08:54:55.975866 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 } I0221 08:54:55.975900 227869 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\scustom-weave-20220221084934-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220221084934-6550/g' /etc/hosts; else echo '127.0.1.1 custom-weave-20220221084934-6550' | sudo tee -a /etc/hosts; fi fi I0221 08:54:56.102934 227869 main.go:130] libmachine: SSH cmd err, output: : I0221 08:54:56.102974 227869 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 08:54:56.103020 227869 ubuntu.go:177] setting up certificates I0221 08:54:56.103036 227869 provision.go:83] configureAuth start I0221 08:54:56.103092 227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550 I0221 08:54:56.140749 227869 provision.go:138] copyHostCerts I0221 08:54:56.140814 227869 exec_runner.go:144] found 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 08:54:56.140828 227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 08:54:56.140916 227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 08:54:56.141002 227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 08:54:56.141016 227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 08:54:56.141053 227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 08:54:56.141122 227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 08:54:56.141135 227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 08:54:56.141163 227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 08:54:56.141225 227869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220221084934-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220221084934-6550] I0221 08:54:56.326607 227869 provision.go:172] copyRemoteCerts I0221 08:54:56.326675 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 08:54:56.326718 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:56.363092 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:56.452714 227869 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0221 08:54:56.472983 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
I0221 08:54:56.494894 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0221 08:54:56.515723 227869 provision.go:86] duration metric: configureAuth took 412.669796ms
I0221 08:54:56.515755 227869 ubuntu.go:193] setting minikube options for container-runtime
I0221 08:54:56.515964 227869 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:56.516026 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.553857 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:56.554015 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:56.554037 227869 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0221 08:54:56.675412 227869 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 08:54:56.675444 227869 ubuntu.go:71] root file system type: overlay
I0221 08:54:56.675646 227869 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 08:54:56.675703 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.714231 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:56.714406 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:56.714509 227869 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 08:54:56.855829 227869 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0221 08:54:56.855929 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:56.893976 227869 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:56.894175 227869 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49369 }
I0221 08:54:56.894198 227869 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 08:54:57.579128 227869 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-02-21 08:54:56.850898043 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 08:54:57.579162 227869 machine.go:91] provisioned docker machine in 1.832554133s
I0221 08:54:57.579173 227869 client.go:171] LocalClient.Create took 9.799347142s
I0221 08:54:57.579189 227869 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20220221084934-6550" took 9.79940181s
I0221 08:54:57.579201 227869 start.go:267] post-start starting for "custom-weave-20220221084934-6550" (driver="docker")
I0221 08:54:57.579207 227869 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 08:54:57.579305 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 08:54:57.579351 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
I0221 08:54:57.613063 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:57.703066 227869 ssh_runner.go:195] Run: cat /etc/os-release
I0221 08:54:57.705959 227869 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 08:54:57.705980 227869 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 08:54:57.705991 227869 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 08:54:57.705996 227869 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 08:54:57.706004 227869 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
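Note: the unit rewrite above leans on diff's exit status: diff -u exits non-zero when the two files differ, so the mv/daemon-reload/restart branch only runs when the generated unit actually changed. The idiom from the log, reduced to its core:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload && sudo systemctl restart docker
    }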
I0221 08:54:57.706050 227869 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 08:54:57.706110 227869 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 08:54:57.706179 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 08:54:57.713029 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 08:54:57.731016 227869 start.go:270] post-start completed in 151.786403ms I0221 08:54:57.731352 227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550 I0221 08:54:57.764434 227869 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json ... I0221 08:54:57.764715 227869 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 08:54:57.764768 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:57.796823 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:57.883538 227869 start.go:129] duration metric: createHost completed in 10.106607266s I0221 08:54:57.883571 227869 start.go:80] releasing machines lock for "custom-weave-20220221084934-6550", held for 10.106740513s I0221 08:54:57.883662 227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550 I0221 08:54:57.916447 227869 ssh_runner.go:195] Run: systemctl --version I0221 08:54:57.916504 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:57.916539 227869 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 08:54:57.916595 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:54:57.952282 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:57.953012 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:54:58.182655 227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 08:54:58.192269 227869 ssh_runner.go:195] Run: sudo systemctl cat 
docker.service I0221 08:54:58.201710 227869 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 08:54:58.201772 227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 08:54:58.217490 227869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 08:54:58.236241 227869 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 08:54:58.328534 227869 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 08:54:58.405690 227869 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 08:54:58.418618 227869 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 08:54:58.507435 227869 ssh_runner.go:195] Run: sudo systemctl start docker I0221 08:54:58.517435 227869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:54:58.555565 227869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 08:54:58.596881 227869 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 08:54:58.596957 227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:54:58.628733 227869 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0221 08:54:58.632087 227869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:54:58.643526 227869 out.go:176] - kubelet.housekeeping-interval=5m I0221 08:54:58.643605 227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:58.643653 227869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:54:58.675389 227869 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:54:58.675418 227869 docker.go:537] Images already preloaded, skipping extraction I0221 08:54:58.675488 227869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 08:54:58.708483 227869 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 08:54:58.708509 227869 cache_images.go:84] Images are preloaded, skipping loading I0221 08:54:58.708561 227869 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 08:54:58.791115 227869 cni.go:93] Creating CNI manager for 
"testdata/weavenet.yaml" I0221 08:54:58.791158 227869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 08:54:58.791174 227869 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220221084934-6550 NodeName:custom-weave-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 08:54:58.791341 227869 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.58.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "custom-weave-20220221084934-6550" kubeletExtraArgs: node-ip: 192.168.58.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.58.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 08:54:58.791445 227869 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker 
--hostname-override=custom-weave-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} I0221 08:54:58.791498 227869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 08:54:58.798800 227869 binaries.go:44] Found k8s binaries, skipping transfer I0221 08:54:58.799251 227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 08:54:58.807147 227869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes) I0221 08:54:58.820224 227869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 08:54:58.833088 227869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes) I0221 08:54:58.846338 227869 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts I0221 08:54:58.849240 227869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 08:54:58.858694 227869 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550 for IP: 192.168.58.2 I0221 08:54:58.858805 227869 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 08:54:58.858840 227869 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 08:54:58.858885 227869 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key I0221 08:54:58.858898 227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt with IP's: [] I0221 08:54:59.108630 227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt ... 
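Note: crypto.go generates these certificates in-process with Go's crypto/x509; minikube does not shell out to openssl. For illustration only, a roughly equivalent CA-signed client certificate with openssl, using placeholder file names:

    # assumes ca.crt/ca.key already exist, as minikubeCA does in this run
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -out client.crt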
I0221 08:54:59.108671 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt: {Name:mk10a31cfb47f6cf3f7da307f7bac4d74ffcf445 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:59.108910 227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key ... I0221 08:54:59.108933 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key: {Name:mke61651e1bae31960788075de046902ba3a384d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:59.109066 227869 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 I0221 08:54:59.109088 227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 08:54:59.505500 227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 ... I0221 08:54:59.505538 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041: {Name:mkbc006409aa5d703ce8a53644ff64d9eca16a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:59.505785 227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 ... 
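Note: the apiserver certificate above is issued with SANs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1], so the server is reachable by node IP, service IP, and loopback without TLS name errors. To confirm which SANs actually landed in a cert (path shortened here for readability):

    openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'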
I0221 08:54:59.505805 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041: {Name:mkad1017a3ef8cd68460d4665ab5aa6e577c7d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:59.505895 227869 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt I0221 08:54:59.505949 227869 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key I0221 08:54:59.506011 227869 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key I0221 08:54:59.506028 227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt with IP's: [] I0221 08:54:59.595538 227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt ... I0221 08:54:59.595578 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt: {Name:mk42c1b2b0663ef91b5f6118e4e09fad281d7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:59.595806 227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key ... 
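Note: if one of these cert/key WriteFile steps were ever suspect, a quick consistency check is to compare the RSA modulus of the pair (paths illustrative):

    openssl x509 -in proxy-client.crt -noout -modulus | openssl md5
    openssl rsa  -in proxy-client.key -noout -modulus | openssl md5
    # matching digests mean the cert and key belong together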
I0221 08:54:59.595823 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key: {Name:mk2f72a2c489551e30437a2aea9d0cb930af0fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:59.595993 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 08:54:59.596029 227869 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 08:54:59.596043 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 08:54:59.596096 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 08:54:59.596127 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 08:54:59.596151 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 08:54:59.596191 227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 08:54:59.597036 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 08:54:59.616277 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0221 08:54:59.637516 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt --> 
/var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 08:54:59.655614 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0221 08:54:59.673516 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 08:54:59.691562 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 08:54:59.709384 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 08:54:59.731673 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 08:54:59.749383 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 08:54:59.768558 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 08:54:59.785931 227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 08:54:59.803428 227869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 08:54:59.816515 227869 ssh_runner.go:195] Run: openssl version I0221 08:54:59.821519 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 08:54:59.829127 227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 08:54:59.832411 227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 08:54:59.832456 227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 08:54:59.837155 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 08:54:59.844619 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 08:54:59.852034 227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 08:54:59.855268 227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 08:54:59.855304 227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 08:54:59.860269 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L 
/etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 08:54:59.867781 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 08:54:59.875277 227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 08:54:59.878320 227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 08:54:59.878371 227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 08:54:59.883480 227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 08:54:59.891452 227869 kubeadm.go:391] StartCluster: {Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:54:59.891586 227869 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 08:54:59.924799 227869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 08:54:59.932091 227869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 08:54:59.939371 227869 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 08:54:59.939430 227869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf 
/etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0221 08:54:59.947372 227869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0221 08:54:59.947423 227869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0221 08:55:00.482705 227869 out.go:203] - Generating certificates and keys ...
I0221 08:55:03.685435 227869 out.go:203] - Booting up control plane ...
I0221 08:55:10.727547 227869 out.go:203] - Configuring RBAC rules ...
I0221 08:55:11.151901 227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
I0221 08:55:11.154044 227869 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
I0221 08:55:11.154111 227869 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ...
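Note: the CNI step that follows copies testdata/weavenet.yaml into the node as /var/tmp/minikube/cni.yaml and applies it with the cluster's own kubectl. The equivalent command, taken directly from the log:

    sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml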
I0221 08:55:11.154161 227869 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml I0221 08:55:11.207872 227869 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory I0221 08:55:11.207908 227869 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes) I0221 08:55:11.231141 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0221 08:55:12.304984 227869 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.073803299s) I0221 08:55:12.305050 227869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 08:55:12.305176 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:12.305176 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=custom-weave-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:12.403260 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:12.403289 227869 ops.go:34] apiserver oom_adj: -16 I0221 08:55:12.963301 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:13.462762 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:13.963185 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:14.463531 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:14.962764 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:15.463397 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:15.963546 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:16.462752 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:16.963400 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:17.463637 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:17.963168 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:18.463128 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:18.962774 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:19.463663 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:19.962811 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:20.463551 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:20.963554 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:21.463298 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:21.963457 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:22.463549 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:22.963434 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:23.463347 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:23.962843 227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 08:55:24.019474 227869 kubeadm.go:1020] duration metric: took 11.714385799s to wait for elevateKubeSystemPrivileges. I0221 08:55:24.019508 227869 kubeadm.go:393] StartCluster complete in 24.128063045s I0221 08:55:24.019531 227869 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:55:24.019619 227869 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:55:24.020875 227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} W0221 08:55:24.035745 227869 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again I0221 08:55:25.038511 227869 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220221084934-6550" rescaled to 1 I0221 08:55:25.038569 227869 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:55:25.041496 227869 out.go:176] * Verifying Kubernetes components... 
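Note: the repeated kubectl get sa default calls above are elevateKubeSystemPrivileges polling until the default ServiceAccount exists, since pods cannot be admitted into a namespace before its default SA is created. The ~11.7s wait reduces to a simple retry loop:

    until sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done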
I0221 08:55:25.038653 227869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 08:55:25.041566 227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 08:55:25.038656 227869 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 08:55:25.041635 227869 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220221084934-6550" I0221 08:55:25.039253 227869 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:55:25.041657 227869 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220221084934-6550" W0221 08:55:25.041668 227869 addons.go:165] addon storage-provisioner should already be in state true I0221 08:55:25.041708 227869 host.go:66] Checking if "custom-weave-20220221084934-6550" exists ... I0221 08:55:25.041706 227869 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220221084934-6550" I0221 08:55:25.041747 227869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220221084934-6550" I0221 08:55:25.042057 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}} I0221 08:55:25.042294 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}} I0221 08:55:25.057925 227869 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220221084934-6550" to be "Ready" ... I0221 08:55:25.062489 227869 node_ready.go:49] node "custom-weave-20220221084934-6550" has status "Ready":"True" I0221 08:55:25.062517 227869 node_ready.go:38] duration metric: took 4.554004ms waiting for node "custom-weave-20220221084934-6550" to be "Ready" ... I0221 08:55:25.062529 227869 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 08:55:25.075842 227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ... I0221 08:55:25.091233 227869 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 08:55:25.091370 227869 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:55:25.091386 227869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 08:55:25.091440 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:55:25.103387 227869 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220221084934-6550" W0221 08:55:25.103416 227869 addons.go:165] addon default-storageclass should already be in state true I0221 08:55:25.103439 227869 host.go:66] Checking if "custom-weave-20220221084934-6550" exists ... 
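Note: the node itself reported Ready in under 5ms, so the remaining wait is on system-critical pods; with a custom CNI, coredns typically stays NotReady until the weave-net pods come up, which is what the long run of "Ready":"False" lines below reflects. A standalone spot-check (labels are the standard upstream ones, not taken from this log):

    kubectl get nodes
    kubectl -n kube-system get pods -l k8s-app=kube-dns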
I0221 08:55:25.103789 227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}} I0221 08:55:25.136464 227869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 08:55:25.138654 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:55:25.154985 227869 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 08:55:25.155049 227869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 08:55:25.155102 227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550 I0221 08:55:25.188302 227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker} I0221 08:55:25.323710 227869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 08:55:25.509102 227869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 08:55:25.628703 227869 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS I0221 08:55:26.031236 227869 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 08:55:26.031270 227869 addons.go:417] enableAddons completed in 992.622832ms I0221 08:55:27.093638 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:29.095472 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:31.106114 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:33.593883 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:35.603309 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:38.094303 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:40.594209 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:43.094975 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:45.594422 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:48.094138 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:55:50.094339 227869 
pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" [... the identical pod_ready.go:102 "Ready":"False" poll for "coredns-64897985d-fw5hd" repeats roughly every 2.5s, 08:55:52 through 08:58:50 ...] I0221 08:58:53.094539 227869 pod_ready.go:102] pod
"coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:55.094604 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:57.593439 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:58:59.593598 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:01.594070 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:04.094375 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:06.593739 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:08.594057 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:10.594906 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:12.595167 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:15.094611 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:17.594243 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:20.094535 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:22.095445 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:24.593641 227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False" I0221 08:59:25.099642 227869 pod_ready.go:81] duration metric: took 4m0.023714023s waiting for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ... E0221 08:59:25.099664 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 08:59:25.099673 227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.101152 227869 pod_ready.go:97] error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found I0221 08:59:25.101173 227869 pod_ready.go:81] duration metric: took 1.494584ms waiting for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ... E0221 08:59:25.101182 227869 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found I0221 08:59:25.101190 227869 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.105178 227869 pod_ready.go:92] pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.105196 227869 pod_ready.go:81] duration metric: took 3.99997ms waiting for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.105204 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
I0221 08:59:25.109930 227869 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.109949 227869 pod_ready.go:81] duration metric: took 4.739462ms waiting for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.109958 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.292675 227869 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.292711 227869 pod_ready.go:81] duration metric: took 182.734028ms waiting for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.292723 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.691815 227869 pod_ready.go:92] pod "kube-proxy-q4stn" in "kube-system" namespace has status "Ready":"True" I0221 08:59:25.691839 227869 pod_ready.go:81] duration metric: took 399.108423ms waiting for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ... I0221 08:59:25.691848 227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:26.092539 227869 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 08:59:26.092566 227869 pod_ready.go:81] duration metric: took 400.710732ms waiting for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 08:59:26.092579 227869 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ... 
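The pod_ready checks above and the poll loop that follows all reduce to one predicate: a pod counts as "Ready" when its PodReady condition reports True. A minimal sketch of that predicate, assuming the k8s.io/api types that client-go consumers use; podReady is a hypothetical helper, not minikube's own code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady (hypothetical) reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false // no PodReady condition reported yet
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(podReady(pod)) // false, matching the "Ready":"False" polls in this log
}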
[... the identical pod_ready.go:102 "Ready":"False" poll for "weave-net-dgkzh" in "kube-system" repeats roughly every 2.5s, 08:59:28 through 09:02:43 ...] I0221 09:02:45.499484 227869 pod_ready.go:102] pod "weave-net-dgkzh" in
"kube-system" namespace has status "Ready":"False" I0221 09:02:47.499802 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:49.999065 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:51.999352 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:54.503567 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:56.998735 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:58.999291 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:00.999500 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:03.001366 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:05.498670 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:07.499251 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:09.998225 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:11.999084 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:14.499690 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:16.998485 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:19.498295 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:21.498521 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:23.499957 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:25.998718 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:26.503352 227869 pod_ready.go:81] duration metric: took 4m0.410759109s waiting for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ... E0221 09:03:26.503375 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 09:03:26.503381 227869 pod_ready.go:38] duration metric: took 8m1.440836229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:03:26.503404 227869 api_server.go:51] waiting for apiserver process to appear ... 
I0221 09:03:26.505928 227869 out.go:176] W0221 09:03:26.506107 227869 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared W0221 09:03:26.506213 227869 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled W0221 09:03:26.506230 227869 out.go:241] * Related issues: * Related issues: W0221 09:03:26.506275 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/4536 - https://github.com/kubernetes/minikube/issues/4536 W0221 09:03:26.506318 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/6014 - https://github.com/kubernetes/minikube/issues/6014 I0221 09:03:26.507855 227869 out.go:176] ** /stderr ** net_test.go:101: failed start: exit status 105 --- FAIL: TestNetworkPlugins/group/custom-weave/Start (519.15s) === FAIL: . TestNetworkPlugins/group/custom-weave (524.66s) net_test.go:154: skipping remaining tests for weave, as results can be unpredictable panic.go:642: *** TestNetworkPlugins/group/custom-weave FAILED at 2022-02-21 09:03:26.546071833 +0000 UTC m=+2299.308391427 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/custom-weave]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect custom-weave-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect custom-weave-20220221084934-6550: -- stdout -- [ { "Id": "59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa", "Created": "2022-02-21T08:54:54.750983019Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 229111, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:54:55.188353195Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/resolv.conf", "HostnamePath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/hostname", "HostsPath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/hosts", "LogPath": "/var/lib/docker/containers/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa/59cfea5eeecfb7dfe576375d21e85fc78af4f71182d8a5debd64cf9fff24e0fa-json.log", "Name": "/custom-weave-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "custom-weave-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "custom-weave-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", 
"MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/l
ib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb/merged", "UpperDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb/diff", "WorkDir": "/var/lib/docker/overlay2/54b9b5451bf28759f69abe623a2ca44ff5d4c0423a88af11292a67226381fffb/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "custom-weave-20220221084934-6550", "Source": "/var/lib/docker/volumes/custom-weave-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "custom-weave-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": 
"gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "custom-weave-20220221084934-6550", "name.minikube.sigs.k8s.io": "custom-weave-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "2e50e0d2e9bb9cbe23d616c3eb71bd84e258ca3dfe1782abff0ee5c5702e7d74", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49369" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49368" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49365" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49367" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49366" } ] }, "SandboxKey": "/var/run/docker/netns/2e50e0d2e9bb", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "custom-weave-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.58.2" }, "Links": null, "Aliases": [ "59cfea5eeecf", "custom-weave-20220221084934-6550" ], "NetworkID": "8f04c0f799cdbf343e84d425f1ca4388cf92aa7825dd26e2443bcb2e6ddf3e18", "EndpointID": "f47747a8e866677e75de509d5ebff9f8d325a45eae331c580281ffef64bb4293", "Gateway": "192.168.58.1", "IPAddress": "192.168.58.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:3a:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p custom-weave-20220221084934-6550 -n custom-weave-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/custom-weave FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/custom-weave]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p custom-weave-20220221084934-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p custom-weave-20220221084934-6550 logs -n 25: (1.30489947s) helpers_test.go:253: TestNetworkPlugins/group/custom-weave logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | start | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:06 UTC | Mon, 21 Feb 2022 08:53:13 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | NoKubernetes-20220221085208-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:13 UTC | Mon, 21 Feb 2022 08:53:15 UTC | | | NoKubernetes-20220221085208-6550 | | | | | | | start | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:05 UTC | Mon, 21 Feb 2022 08:53:21 UTC | | 
| kubernetes-upgrade-20220221085141-6550 | | | | | | | | --memory=2200 | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | | | --alsologtostderr -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | start | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:52:46 UTC | Mon, 21 Feb 2022 08:53:25 UTC | | | --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:25 UTC | Mon, 21 Feb 2022 08:53:26 UTC | | | --alsologtostderr -v=5 | | | | | | | unpause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:27 UTC | Mon, 21 Feb 2022 08:53:28 UTC | | | --alsologtostderr -v=5 | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | 
cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:02:46 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:02:46.418914 421870 out.go:297] Setting OutFile to fd 1 ... I0221 09:02:46.419151 421870 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:02:46.419167 421870 out.go:310] Setting ErrFile to fd 2... I0221 09:02:46.419173 421870 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:02:46.419315 421870 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:02:46.419744 421870 out.go:304] Setting JSON to false I0221 09:02:46.422139 421870 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2721,"bootTime":1645431446,"procs":586,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:02:46.422249 421870 start.go:122] virtualization: kvm guest I0221 09:02:46.425907 421870 out.go:176] * [kindnet-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:02:46.427552 421870 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:02:46.426088 421870 notify.go:193] Checking for updates... 
I0221 09:02:46.429105 421870 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:02:46.430539 421870 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:02:46.431957 421870 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:02:46.433542 421870 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:02:46.434195 421870 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:46.434347 421870 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:46.434466 421870 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:46.434580 421870 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:02:46.485736 421870 docker.go:132] docker version: linux-20.10.12 I0221 09:02:46.485848 421870 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:02:46.590394 421870 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:02:46.526492405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:02:46.590501 421870 docker.go:237] overlay module found I0221 09:02:46.592885 421870 out.go:176] * Using the docker driver based on user configuration I0221 09:02:46.592913 421870 start.go:281] selected driver: docker I0221 09:02:46.592920 421870 start.go:798] validating driver "docker" against I0221 09:02:46.592941 421870 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:02:46.593002 421870 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:02:46.593034 421870 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:02:46.594359 421870 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:02:46.595176 421870 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:02:46.689048 421870 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:02:46.626989751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:02:46.689173 421870 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:02:46.689337 421870 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:02:46.689374 421870 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:02:46.689398 421870 cni.go:93] Creating CNI manager for "kindnet" I0221 09:02:46.689411 421870 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk" I0221 09:02:46.689420 421870 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk" I0221 09:02:46.689425 421870 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni I0221 09:02:46.689436 421870 start_flags.go:302] config: {Name:kindnet-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:02:46.691655 421870 out.go:176] * Starting control plane node kindnet-20220221084934-6550 in cluster kindnet-20220221084934-6550 I0221 09:02:46.691689 421870 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:02:46.693178 421870 out.go:176] * Pulling base image ... 
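Both `docker system info --format "{{json .}}"` dumps above feed driver validation: NCPU and MemTotal gate the requested --memory=2048, while MemoryLimit and CgroupDriver produce the cgroup warning printed here. A sketch of the shell-out-and-decode pattern, keeping only a handful of the fields this path inspects (the full JSON is far larger, as the dumps show):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds a few fields of interest; all names match the keys
// visible in the docker info dump above.
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	CgroupDriver string `json:"CgroupDriver"`
	MemoryLimit  bool   `json:"MemoryLimit"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%d mem=%d cgroup=%s memlimit=%v\n",
		info.NCPU, info.MemTotal, info.CgroupDriver, info.MemoryLimit)
}
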
I0221 09:02:46.693202 421870 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:02:46.693231 421870 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:02:46.693248 421870 cache.go:57] Caching tarball of preloaded images I0221 09:02:46.693298 421870 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:02:46.693510 421870 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:02:46.693531 421870 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:02:46.693663 421870 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/config.json ... I0221 09:02:46.693691 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/config.json: {Name:mk5e9f6fabb2503a70e5e3f2016d5064b170a784 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:46.739404 421870 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:02:46.739434 421870 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:02:46.739450 421870 cache.go:208] Successfully downloaded all kic artifacts I0221 09:02:46.739489 421870 start.go:313] acquiring machines lock for kindnet-20220221084934-6550: {Name:mkae4e55a073d1017bc2176c7236155c21c25592 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:02:46.739642 421870 start.go:317] acquired machines lock for "kindnet-20220221084934-6550" in 125.251µs I0221 09:02:46.739672 421870 start.go:89] Provisioning new machine with config: &{Name:kindnet-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local 
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:02:46.739741 421870 start.go:126] createHost starting for "" (driver="docker") I0221 09:02:43.499053 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:45.499484 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:46.742792 421870 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:02:46.742990 421870 start.go:160] libmachine.API.Create for "kindnet-20220221084934-6550" (driver="docker") I0221 09:02:46.743041 421870 client.go:168] LocalClient.Create starting I0221 09:02:46.743109 421870 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:02:46.743139 421870 main.go:130] libmachine: Decoding PEM data... I0221 09:02:46.743155 421870 main.go:130] libmachine: Parsing certificate... I0221 09:02:46.743210 421870 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:02:46.743229 421870 main.go:130] libmachine: Decoding PEM data... I0221 09:02:46.743240 421870 main.go:130] libmachine: Parsing certificate... 
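The Reading/Decoding/Parsing triple above for ca.pem and cert.pem maps one-to-one onto Go's standard library. A self-contained sketch of that sequence:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func parseCertPEM(path string) (*x509.Certificate, error) {
	data, err := os.ReadFile(path) // "Reading certificate data from ..."
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil || block.Type != "CERTIFICATE" {
		return nil, fmt.Errorf("%s: no CERTIFICATE block found", path)
	}
	return x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
}

func main() {
	cert, err := parseCertPEM(os.ExpandEnv("$HOME/.minikube/certs/ca.pem"))
	if err != nil {
		panic(err)
	}
	fmt.Println("CA subject:", cert.Subject)
}
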
I0221 09:02:46.743611 421870 cli_runner.go:133] Run: docker network inspect kindnet-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:02:46.776910 421870 cli_runner.go:180] docker network inspect kindnet-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:02:46.777036 421870 network_create.go:254] running [docker network inspect kindnet-20220221084934-6550] to gather additional debugging logs... I0221 09:02:46.777068 421870 cli_runner.go:133] Run: docker network inspect kindnet-20220221084934-6550 W0221 09:02:46.810101 421870 cli_runner.go:180] docker network inspect kindnet-20220221084934-6550 returned with exit code 1 I0221 09:02:46.810129 421870 network_create.go:257] error running [docker network inspect kindnet-20220221084934-6550]: docker network inspect kindnet-20220221084934-6550: exit status 1 stdout: [] stderr: Error: No such network: kindnet-20220221084934-6550 I0221 09:02:46.810149 421870 network_create.go:259] output of [docker network inspect kindnet-20220221084934-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: kindnet-20220221084934-6550 ** /stderr ** I0221 09:02:46.810193 421870 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:02:46.850325 421870 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001142e0] misses:0} I0221 09:02:46.850390 421870 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:02:46.850413 421870 network_create.go:106] attempt to create docker network kindnet-20220221084934-6550 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
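`reserving subnet 192.168.49.0 for 1m0s` and `using free private subnet 192.168.49.0/24` come from probing candidate CIDRs before `docker network create`. A rough sketch under simplified assumptions: only a local-interface collision test, and a step of 9 between candidates (the 49 -> 58 -> 67 progression minikube logs show when a subnet is taken); the real code also keeps the one-minute in-process reservation map the log line mentions.

package main

import (
	"fmt"
	"net"
)

// subnetFree reports whether no local interface address falls inside cidr.
func subnetFree(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return false, nil // something on this host already lives here
		}
	}
	return true, nil
}

func main() {
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if ok, _ := subnetFree(cidr); ok {
			fmt.Println("using free private subnet", cidr)
			return
		}
	}
}
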
I0221 09:02:46.850466 421870 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220221084934-6550 I0221 09:02:46.941862 421870 network_create.go:90] docker network kindnet-20220221084934-6550 192.168.49.0/24 created I0221 09:02:46.941911 421870 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20220221084934-6550" container I0221 09:02:46.941988 421870 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:02:46.986866 421870 cli_runner.go:133] Run: docker volume create kindnet-20220221084934-6550 --label name.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:02:47.033525 421870 oci.go:102] Successfully created a docker volume kindnet-20220221084934-6550 I0221 09:02:47.033640 421870 cli_runner.go:133] Run: docker run --rm --name kindnet-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --entrypoint /usr/bin/test -v kindnet-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:02:47.653123 421870 oci.go:106] Successfully prepared a docker volume kindnet-20220221084934-6550 I0221 09:02:47.653182 421870 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:02:47.653206 421870 kic.go:179] Starting extracting preloaded images to volume ... I0221 09:02:47.653278 421870 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:02:47.499802 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:49.999065 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:51.999352 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:02:53.357753 421870 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.704415115s) I0221 09:02:53.357793 421870 kic.go:188] duration metric: took 5.704584 seconds to extract preloaded images to volume W0221 09:02:53.357839 421870 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:02:53.357848 421870 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
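The preload extraction above is a plain `docker run` with a tar entrypoint against the kicbase image, and the `duration metric` line is a time.Since measurement around it. A sketch reproducing the command shape straight from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the lz4 preload tarball into a named docker volume,
// exactly as the "Run: docker run --rm --entrypoint /usr/bin/tar ..." line does.
func extractPreload(tarball, volume, baseImage string) error {
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract preloaded images to volume\n", time.Since(start))
	return nil
}

func main() {
	// Paths/names below are the ones visible in the log, shortened.
	_ = extractPreload(
		"preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4",
		"kindnet-20220221084934-6550",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531")
}
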
I0221 09:02:53.357899 421870 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:02:53.494107 421870 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220221084934-6550 --name kindnet-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220221084934-6550 --network kindnet-20220221084934-6550 --ip 192.168.49.2 --volume kindnet-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:02:53.938142 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Running}} I0221 09:02:53.977761 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:02:54.014430 421870 cli_runner.go:133] Run: docker exec kindnet-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:02:54.092969 421870 oci.go:281] the created container "kindnet-20220221084934-6550" has a running status. I0221 09:02:54.093005 421870 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa... I0221 09:02:54.326177 421870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:02:54.417769 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:02:54.456669 421870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:02:54.456690 421870 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:02:54.581545 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:02:54.633727 421870 machine.go:88] provisioning docker machine ... 
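`Creating ssh key for kic` a few lines above writes the id_rsa/id_rsa.pub pair whose public half is then copied to /home/docker/.ssh/authorized_keys. A sketch of generating such a pair with the standard library plus golang.org/x/crypto/ssh:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func writeKeyPair(path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	// PEM-encode the private key as id_rsa.
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(path, priv, 0600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	// authorized_keys form: this is what lands in /home/docker/.ssh/.
	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
}

func main() {
	if err := writeKeyPair("id_rsa"); err != nil {
		panic(err)
	}
}
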
I0221 09:02:54.633785 421870 ubuntu.go:169] provisioning hostname "kindnet-20220221084934-6550" I0221 09:02:54.633845 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:54.674073 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:54.674260 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:54.674279 421870 main.go:130] libmachine: About to run SSH command: sudo hostname kindnet-20220221084934-6550 && echo "kindnet-20220221084934-6550" | sudo tee /etc/hostname I0221 09:02:54.809726 421870 main.go:130] libmachine: SSH cmd err, output: : kindnet-20220221084934-6550 I0221 09:02:54.809848 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:54.851935 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:54.852127 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:54.852159 421870 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\skindnet-20220221084934-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220221084934-6550/g' /etc/hosts; else echo '127.0.1.1 kindnet-20220221084934-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:02:54.978885 421870 main.go:130] libmachine: SSH cmd err, output: : I0221 09:02:54.978913 421870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:02:54.978931 421870 ubuntu.go:177] setting up certificates I0221 09:02:54.978938 421870 provision.go:83] configureAuth start I0221 09:02:54.978987 421870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220221084934-6550 I0221 09:02:55.016078 421870 provision.go:138] copyHostCerts I0221 09:02:55.016145 421870 exec_runner.go:144] found 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:02:55.016160 421870 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:02:55.016225 421870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:02:55.016312 421870 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:02:55.016325 421870 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:02:55.016355 421870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:02:55.016441 421870 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:02:55.016454 421870 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:02:55.016482 421870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:02:55.016545 421870 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220221084934-6550 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220221084934-6550] I0221 09:02:55.142876 421870 provision.go:172] copyRemoteCerts I0221 09:02:55.142927 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:02:55.142956 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.180040 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:55.271610 421870 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:02:55.290168 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes) I0221 09:02:55.309782 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 09:02:55.329912 421870 provision.go:86] duration metric: configureAuth took 350.961994ms I0221 09:02:55.329942 421870 ubuntu.go:193] setting minikube options for container-runtime I0221 09:02:55.330124 421870 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:02:55.330167 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.365504 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:55.365645 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:55.365660 421870 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:02:55.491213 421870 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:02:55.491235 421870 ubuntu.go:71] root file system type: overlay I0221 09:02:55.491415 421870 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:02:55.491488 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:55.526411 421870 main.go:130] libmachine: Using SSH client type: native I0221 09:02:55.526581 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 } I0221 09:02:55.526680 421870 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:02:55.662369 421870 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0221 09:02:55.662441 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550
I0221 09:02:55.699216 421870 main.go:130] libmachine: Using SSH client type: native
I0221 09:02:55.699355 421870 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49384 }
I0221 09:02:55.699374 421870 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:02:54.503567 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:56.998735 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
I0221 09:02:56.491686 421870 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-02-21 09:02:55.657011187 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 09:02:56.491800 421870 machine.go:91] provisioned docker machine in 1.858045045s
I0221 09:02:56.491822 421870 client.go:171] LocalClient.Create took 9.74877581s
I0221 09:02:56.491872 421870 start.go:168] duration metric: libmachine.API.Create for "kindnet-20220221084934-6550" took 9.748881649s
I0221 09:02:56.491889 421870 start.go:267] post-start starting for "kindnet-20220221084934-6550" (driver="docker")
I0221 09:02:56.491903 421870 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:02:56.492012 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:02:56.492066 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550
I0221 09:02:56.526465 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker}
I0221 09:02:56.615129 421870 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:02:56.617935 421870 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:02:56.617957 421870 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:02:56.617965 421870 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:02:56.617970 421870 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:02:56.617978 421870 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
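The `sudo diff -u ... || { ... }` command whose output just printed is an idempotence guard: the rendered unit is only moved into place, and Docker only reloaded and restarted, when docker.service.new actually differs from the installed file. The same guard expressed in Go (shelling out to systemctl; assumes root):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installIfChanged moves newPath over path and restarts the unit only when
// the contents differ, mirroring the `diff -u ... || { ... }` shell guard.
func installIfChanged(path, newPath, unit string) error {
	oldData, _ := os.ReadFile(path) // a missing file reads as empty, forcing install
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(newPath) // nothing to do; don't leave the .new file around
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
}
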
I0221 09:02:56.618034 421870 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:02:56.618103 421870 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:02:56.618171 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:02:56.625275 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:02:56.644498 421870 start.go:270] post-start completed in 152.588958ms I0221 09:02:56.644916 421870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220221084934-6550 I0221 09:02:56.681293 421870 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/config.json ... I0221 09:02:56.681515 421870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:02:56.681610 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.721379 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.808966 421870 start.go:129] duration metric: createHost completed in 10.069211525s I0221 09:02:56.808998 421870 start.go:80] releasing machines lock for "kindnet-20220221084934-6550", held for 10.069338497s I0221 09:02:56.809095 421870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220221084934-6550 I0221 09:02:56.850076 421870 ssh_runner.go:195] Run: systemctl --version I0221 09:02:56.850132 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.850167 421870 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:02:56.850237 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:02:56.890827 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.891336 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:02:56.976243 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:02:57.132743 421870 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:02:57.148381 421870 
cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:02:57.148442 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:02:57.159381 421870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:02:57.175410 421870 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:02:57.271305 421870 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:02:57.355294 421870 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:02:57.367402 421870 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:02:57.488213 421870 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:02:57.498236 421870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:02:57.539420 421870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:02:57.582488 421870 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:02:57.582558 421870 cli_runner.go:133] Run: docker network inspect kindnet-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:02:57.617583 421870 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0221 09:02:57.621298 421870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:02:57.633091 421870 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:02:57.634593 421870 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk I0221 09:02:57.634664 421870 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:02:57.634717 421870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:02:57.669633 421870 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:02:57.669660 421870 docker.go:537] Images already preloaded, skipping extraction I0221 09:02:57.669718 421870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:02:57.702534 421870 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:02:57.702557 421870 cache_images.go:84] Images are preloaded, skipping loading I0221 09:02:57.702614 421870 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:02:57.801443 421870 cni.go:93] 
Creating CNI manager for "kindnet" I0221 09:02:57.801477 421870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:02:57.801495 421870 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220221084934-6550 NodeName:kindnet-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:02:57.802095 421870 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "kindnet-20220221084934-6550" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:02:57.802216 421870 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk 
--config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} I0221 09:02:57.802277 421870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 09:02:57.812194 421870 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:02:57.812264 421870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:02:57.820437 421870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes) I0221 09:02:57.835878 421870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:02:57.851132 421870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes) I0221 09:02:57.866717 421870 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0221 09:02:57.870186 421870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:02:57.880930 421870 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550 for IP: 192.168.49.2 I0221 09:02:57.881073 421870 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:02:57.881123 421870 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:02:57.881182 421870 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.key I0221 09:02:57.881201 421870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt with IP's: [] I0221 09:02:58.122192 421870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt ... 
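The `{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo ...; } > /tmp/h.$$; sudo cp` pipeline above is a rewrite-then-swap /etc/hosts update: drop any stale mapping, append the fresh one, replace the file in a single copy. The same logic sketched in Go:

package main

import (
	"os"
	"strings"
)

// pinHostsEntry rewrites path so that exactly one line maps host to ip,
// mirroring the grep -v / echo / cp pipeline from the log.
func pinHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale entry ending in the host name (tab- or space-separated).
		if strings.HasSuffix(line, "\t"+host) || strings.HasSuffix(line, " "+host) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp" // stand-in for the shell's /tmp/h.$$ scratch file
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = pinHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
}
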
I0221 09:02:58.122248 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: {Name:mkfcad536857e2df5f764473a6c4022c78e2cb6b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.122520 421870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.key ... I0221 09:02:58.122555 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.key: {Name:mk0a0ed6833930623faa4187b4c5b9df5d813c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.122712 421870 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2 I0221 09:02:58.122738 421870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:02:58.294361 421870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2 ... I0221 09:02:58.294395 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2: {Name:mk57f05819422b53c694b7dbd0538167943b8123 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.294585 421870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2 ... 
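The apiserver certificate above is a CA-signed leaf whose SAN list [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] covers the node IP, the in-cluster `kubernetes` service VIP (the first host of the ServiceCIDR 10.96.0.0/12 from the config dump), and loopback. A condensed crypto/x509 sketch; the serials, key sizes, subjects, and expiries here are simplified assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the cached minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	ca, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the SAN list from the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leafTmpl, ca, &leafKey.PublicKey, caKey)
	fmt.Println("leaf DER bytes:", len(der), "err:", err)
}
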
I0221 09:02:58.294598 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2: {Name:mkace58def46b0ded866dbe122dff06b32df1c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.294688 421870 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt I0221 09:02:58.294748 421870 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key I0221 09:02:58.294794 421870 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key I0221 09:02:58.294807 421870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt with IP's: [] I0221 09:02:58.386352 421870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt ... I0221 09:02:58.386394 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt: {Name:mk1e5b13534fa30b464e2af4b13ee0434adbb152 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.386580 421870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key ... 
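Once the versioned apiserver.crt.dd3b5fb2 has been copied to its final apiserver.crt name, the embedded IPs can be confirmed with a one-liner; this check is not part of the test run, just a quick way to inspect what the steps above produced:

    # Show the Subject Alternative Name extension of the generated cert.
    openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'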
I0221 09:02:58.386595 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key: {Name:mkccae812985e46ee45b7ed63a4e8f01e4ef79bc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:02:58.386808 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:02:58.386847 421870 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:02:58.386860 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:02:58.386879 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:02:58.386904 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:02:58.386930 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:02:58.386967 421870 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:02:58.387842 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:02:58.406406 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:02:58.427877 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 
bytes) I0221 09:02:58.450370 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:02:58.471031 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:02:58.488588 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:02:58.510325 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:02:58.531870 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:02:58.552589 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:02:58.572723 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:02:58.592243 421870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:02:58.612569 421870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:02:58.627629 421870 ssh_runner.go:195] Run: openssl version I0221 09:02:58.632470 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:02:58.640718 421870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:02:58.643932 421870 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:02:58.643984 421870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:02:58.649626 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:02:58.659813 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:02:58.668032 421870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:02:58.671395 421870 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:02:58.671466 421870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:02:58.676558 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L 
/etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:02:58.686078 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:02:58.695102 421870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:02:58.698227 421870 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:02:58.698283 421870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:02:58.703261 421870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:02:58.711034 421870 kubeadm.go:391] StartCluster: {Name:kindnet-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kindnet-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:02:58.711184 421870 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:02:58.743141 421870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:02:58.750761 421870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:02:58.758046 421870 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:02:58.758093 421870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf 
/etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:02:58.765348 421870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:02:58.765385 421870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:02:59.415593 421870 out.go:203] - Generating certificates and keys ... I0221 09:02:57.356331 223679 system_pods.go:86] 9 kube-system pods found I0221 09:02:57.356379 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:02:57.356394 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:02:57.356411 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:02:57.356420 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:02:57.356428 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:02:57.356435 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:02:57.356448 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:02:57.356454 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:02:57.356467 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:02:57.356486 223679 retry.go:31] will retry after 47.463338706s: missing components: kube-dns I0221 09:02:58.999291 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" 
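The test/ln/openssl sequence a few entries back (65502.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0, 6550.pem -> 51391683.0) installs each CA into OpenSSL's hashed directory so that anything scanning /etc/ssl/certs can find it by subject hash. A sketch for a single PEM, assuming minikubeCA.pem is already under /usr/share/ca-certificates:

    # Link the CA under /etc/ssl/certs, then again under its subject hash,
    # mirroring the commands logged above.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo /bin/bash -c "test -s $pem && ln -fs $pem /etc/ssl/certs/minikubeCA.pem"
    h=$(openssl x509 -hash -noout -in "$pem")    # prints e.g. b5213941
    sudo /bin/bash -c "test -L /etc/ssl/certs/$h.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$h.0"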
I0221 09:03:00.999500 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:02.244326 421870 out.go:203] - Booting up control plane ... I0221 09:03:03.001366 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:05.498670 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:10.293083 421870 out.go:203] - Configuring RBAC rules ... I0221 09:03:10.709558 421870 cni.go:93] Creating CNI manager for "kindnet" I0221 09:03:10.711610 421870 out.go:176] * Configuring CNI (Container Networking Interface) ... I0221 09:03:10.711695 421870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap I0221 09:03:10.716222 421870 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ... I0221 09:03:10.716244 421870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes) I0221 09:03:10.732451 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0221 09:03:07.499251 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:09.998225 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:11.999084 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:11.923993 421870 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.191495967s) I0221 09:03:11.924059 421870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:03:11.924159 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:11.924167 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=kindnet-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T09_03_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:12.032677 421870 ops.go:34] apiserver oom_adj: -16 I0221 09:03:12.032780 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:12.605285 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:13.105004 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:13.605923 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:14.105476 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:14.605851 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:15.105728 421870 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:15.605955 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:16.105032 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:14.499690 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:16.998485 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:16.605672 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:17.105080 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:17.605790 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:18.106006 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:18.605297 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:19.105722 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:19.605419 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:20.105021 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:20.605093 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:21.105621 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:19.498295 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:21.498521 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:21.605168 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:22.105314 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:22.605614 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:23.105172 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:23.605755 421870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:03:23.737549 421870 kubeadm.go:1020] duration metric: took 11.81345574s to wait for elevateKubeSystemPrivileges. 
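The run of identical `kubectl get sa default` commands above, spaced roughly 0.5s apart, is minikube waiting for the cluster to create the "default" ServiceAccount before the minikube-rbac clusterrolebinding can point at it (elevateKubeSystemPrivileges; 11.8s in this run). Reduced to a plain shell loop:

    # Poll until the "default" ServiceAccount exists, one attempt per ~0.5s,
    # as the repeated entries above do.
    until sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done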
I0221 09:03:23.737585 421870 kubeadm.go:393] StartCluster complete in 25.026601823s I0221 09:03:23.737605 421870 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:23.737698 421870 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:03:23.739843 421870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:24.259930 421870 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220221084934-6550" rescaled to 1 I0221 09:03:24.260011 421870 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:03:24.260042 421870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:03:24.262604 421870 out.go:176] * Verifying Kubernetes components... I0221 09:03:24.260238 421870 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:03:24.262730 421870 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220221084934-6550" I0221 09:03:24.262765 421870 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220221084934-6550" W0221 09:03:24.262773 421870 addons.go:165] addon storage-provisioner should already be in state true I0221 09:03:24.260430 421870 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:24.262809 421870 host.go:66] Checking if "kindnet-20220221084934-6550" exists ... I0221 09:03:24.262817 421870 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220221084934-6550" I0221 09:03:24.262671 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:03:24.262845 421870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220221084934-6550" I0221 09:03:24.263204 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:03:24.263349 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:03:24.307855 421870 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:03:24.307554 421870 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220221084934-6550" W0221 09:03:24.307952 421870 addons.go:165] addon default-storageclass should already be in state true I0221 09:03:24.307975 421870 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:03:24.307981 421870 host.go:66] Checking if "kindnet-20220221084934-6550" exists ... 
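start.go:208 above begins a 5-minute wait for the single control-plane node to report Ready; minikube does this by polling the API in-process, but the same condition can be expressed as a one-shot kubectl command (the context name below assumes the usual profile-named kubeconfig entry):

    # Hypothetical standalone equivalent of the 5m node-readiness wait.
    kubectl --context kindnet-20220221084934-6550 wait \
      --for=condition=Ready node/kindnet-20220221084934-6550 --timeout=5m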
I0221 09:03:24.307985 421870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:03:24.308036 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:03:24.308426 421870 cli_runner.go:133] Run: docker container inspect kindnet-20220221084934-6550 --format={{.State.Status}} I0221 09:03:24.346731 421870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:03:24.349502 421870 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220221084934-6550" to be "Ready" ... I0221 09:03:24.369657 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:03:24.369909 421870 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:03:24.369923 421870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:03:24.369985 421870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220221084934-6550 I0221 09:03:24.417801 421870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kindnet-20220221084934-6550/id_rsa Username:docker} I0221 09:03:24.513471 421870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:03:24.522931 421870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:03:24.602695 421870 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0221 09:03:24.810287 421870 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 09:03:24.810310 421870 addons.go:417] enableAddons completed in 550.081796ms I0221 09:03:26.357757 421870 node_ready.go:58] node "kindnet-20220221084934-6550" has status "Ready":"False" I0221 09:03:23.499957 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:25.998718 227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False" I0221 09:03:26.503352 227869 pod_ready.go:81] duration metric: took 4m0.410759109s waiting for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ... E0221 09:03:26.503375 227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 09:03:26.503381 227869 pod_ready.go:38] duration metric: took 8m1.440836229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... 
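The get/sed/replace pipeline above is how the host.minikube.internal record lands in CoreDNS ("host record injected into CoreDNS" at start.go:777): fetch the coredns ConfigMap, splice a hosts block in front of the forward plugin, and push the result back with kubectl replace. The same pipeline, reflowed for readability:

    # Inject a hosts{} block ahead of CoreDNS's forward plugin so
    # host.minikube.internal resolves to the gateway (192.168.49.1).
    KUBECTL='sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig'
    $KUBECTL -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
      | $KUBECTL replace -f -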
I0221 09:03:26.503404 227869 api_server.go:51] waiting for apiserver process to appear ... I0221 09:03:26.505928 227869 out.go:176] W0221 09:03:26.506107 227869 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared W0221 09:03:26.506213 227869 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled W0221 09:03:26.506230 227869 out.go:241] * Related issues: W0221 09:03:26.506275 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/4536 W0221 09:03:26.506318 227869 out.go:241] - https://github.com/kubernetes/minikube/issues/6014 * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 08:54:55 UTC, end at Mon 2022-02-21 09:03:27 UTC. -- Feb 21 09:02:14 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:14.697367580Z" level=info msg="ignoring event" container=88ce05954468ab57698064df19cf814c5ede1ec4eda27856f100378261b791f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:17 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:17.672194107Z" level=info msg="ignoring event" container=252e0bd2b3c27c7ffd30a4ca63fed9b0d2f1690abe0113a3cb903e33da27acb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:20 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:20.864605647Z" level=info msg="ignoring event" container=737ed589748b3a80ba14cd9553476955a9d4a2ab6192db2aeae03bf1dd75d9b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:23 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:23.331864889Z" level=info msg="ignoring event" container=e6a494216854192163347189af4fefab83f8a968046164a80dd1cd46b19ab14c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:25 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:25.884977063Z" level=info msg="ignoring event" container=811e0c6b7450a8178c1d5b10099f8c9d11b74e185a5501f60dacdc8738c3abd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:29 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:29.054733485Z" level=info msg="ignoring event" container=15dce3c4013039e752d3e075d8a3e2a64a7a9e7d05a442eff6d467dfeff4b8ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:31 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:31.809103204Z" level=info msg="ignoring event" container=c5ecf5a037334fd5a00bd55c6db7f11781a556e1d9de94aee6381e1cf698bd46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:34 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:34.872639462Z" level=info msg="ignoring event" container=a5cb2f7b49a0751b3de6dc9ec2932815786dc892007094d62319698b30a8152a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:38 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:38.064951004Z" level=info msg="ignoring event" container=24fa7de1fd8e102016ef8b0ed78d538891c402212ea8790fd304e7f62f49ef27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:40 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:40.569723358Z" level=info msg="ignoring event" 
container=282e8b8957134fd2222ecb0e1bc665e6567fea1576d36c09c217b3670260eb08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:42 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:42.904622303Z" level=info msg="ignoring event" container=dba636f77f8b3865a153b5f9eab718078e2e3a542eeff5549b5eb0ddf5a7a132 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:45 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:45.922787256Z" level=info msg="ignoring event" container=8f15cce7254b970e81801c968279312efcf21fbbbd5116d6b4a04cc5ec89f7a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:49 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:49.297901218Z" level=info msg="ignoring event" container=f69bd826c4b20f09ae642f28d49e95246da6a7a2b73468e78ba3b4b490dba308 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:55 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:55.059571054Z" level=info msg="ignoring event" container=af47f6a40b5dff44f2c94228f44b8340813ccea7507a5fed92f4755a84029496 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:02:58 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:02:58.109356523Z" level=info msg="ignoring event" container=796295c4646377148ffd3aa593767ade48b73e56a3c96b7f27f15178bc7fb107 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:01 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:01.372423917Z" level=info msg="ignoring event" container=1f053a4c5730668c77e0ca0ad5c264c0f70994a52ce07ecbf5aedb22a54c72ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:03 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:03.956957575Z" level=info msg="ignoring event" container=46e5973497db31d649e643cb75a26b01252543cdd4fc8bbf99289586091c27a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:06 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:06.557797427Z" level=info msg="ignoring event" container=1aa26186412e915010c101980740e3591abd87c8fb665ce06ebbe089860ccd65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:09 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:09.276258472Z" level=info msg="ignoring event" container=99957ac19e4b1e4a8691fa770f14ae400ee0dbd763658da2036c119b3d3f1f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:12 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:12.157275454Z" level=info msg="ignoring event" container=25ade4adf1d675cfd74595935a4a73a35836ac781e002460f21ba05679b754bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:14 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:14.641343436Z" level=info msg="ignoring event" container=d579e5bc4d63858758e923a26b9e4162c525ed3920c38b3cac0c6dbd1168db6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:17 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:17.283661320Z" level=info msg="ignoring event" container=09ca64fdcc403616320dad9883db51d84abc802dc6d5ca64ab5114f486e96873 
module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:19 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:19.460879613Z" level=info msg="ignoring event" container=881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:22 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:22.643034928Z" level=info msg="ignoring event" container=008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:25 custom-weave-20220221084934-6550 dockerd[458]: time="2022-02-21T09:03:25.647622603Z" level=info msg="ignoring event" container=fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 482d2370d9581 e9dd2f85e51b4 2 minutes ago Exited weave 5 4e9007562a877 9e817ce47282a 6e38f40d628db 2 minutes ago Exited storage-provisioner 5 6eb8806cfc7a2 5b965740dd4ad weaveworks/weave-npc@sha256:0f6166e000faa500ccc0df53caae17edd3110590b7b159007a5ea727cdfb1cef 7 minutes ago Running weave-npc 0 4e9007562a877 0cb891515343e 2114245ec4d6b 8 minutes ago Running kube-proxy 0 15ac6f927e0ae a014e0a91eccb 62930710c9634 8 minutes ago Running kube-apiserver 0 770b587b6be71 b59c9c533c60c aceacb6244f9f 8 minutes ago Running kube-scheduler 0 f9d0fcb630265 6039583378dbe 25f8c7f3da61c 8 minutes ago Running etcd 0 56ca1829f5b89 93b77eb808339 25444908517a5 8 minutes ago Running kube-controller-manager 0 c35f5c04ef1df * * ==> describe nodes <== * Name: custom-weave-20220221084934-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=custom-weave-20220221084934-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=custom-weave-20220221084934-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T08_55_12_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 08:55:08 +0000 Taints: Unschedulable: false Lease: HolderIdentity: custom-weave-20220221084934-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:03:21 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:00:47 +0000 Mon, 21 Feb 2022 08:55:21 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.58.2 Hostname: custom-weave-20220221084934-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki 
hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: d8899eaa-a145-497e-bd02-b1e6b9bda954 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-64897985d-fw5hd 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 8m3s kube-system etcd-custom-weave-20220221084934-6550 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 8m16s kube-system kube-apiserver-custom-weave-20220221084934-6550 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m16s kube-system kube-controller-manager-custom-weave-20220221084934-6550 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m16s kube-system kube-proxy-q4stn 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m3s kube-system kube-scheduler-custom-weave-20220221084934-6550 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m16s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m1s kube-system weave-net-dgkzh 20m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m3s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 770m (9%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 8m2s kube-proxy Normal NodeHasSufficientMemory 8m23s (x4 over 8m23s) kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m23s (x4 over 8m23s) kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m23s (x4 over 8m23s) kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 8m16s kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m16s kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m16s kubelet Node custom-weave-20220221084934-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 8m16s kubelet Updated Node Allocatable limit across pods Normal Starting 8m16s kubelet Starting kubelet. 
Normal NodeReady 8m6s kubelet Node custom-weave-20220221084934-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 c9 e8 63 60 1b 08 06 [ +5.838269] IPv4: martian source 10.85.0.156 from 10.85.0.156, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 44 32 6b 48 e8 08 06 [ +3.065442] IPv4: martian source 10.85.0.157 from 10.85.0.157, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 26 81 0f 06 4a 08 06 [Feb21 09:03] IPv4: martian source 10.85.0.158 from 10.85.0.158, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 80 7d 07 f0 ca 08 06 [ +2.561210] IPv4: martian source 10.85.0.159 from 10.85.0.159, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 23 e1 c4 83 2c 08 06 [ +2.615653] IPv4: martian source 10.85.0.160 from 10.85.0.160, on dev eth0 [ +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e 64 41 7f 5e 31 08 06 [ +2.733452] IPv4: martian source 10.85.0.161 from 10.85.0.161, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff da fc d1 c9 f2 2a 08 06 [ +2.883194] IPv4: martian source 10.85.0.162 from 10.85.0.162, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 5e d5 29 ea a8 08 06 [ +2.455339] IPv4: martian source 10.85.0.163 from 10.85.0.163, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 50 c8 60 43 de 08 06 [ +2.674144] IPv4: martian source 10.85.0.164 from 10.85.0.164, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff ae b8 d8 5c 06 86 08 06 [ +2.173451] IPv4: martian source 10.85.0.165 from 10.85.0.165, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 23 71 a2 17 13 08 06 [ +3.191430] IPv4: martian source 10.85.0.166 from 10.85.0.166, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff fa ee 02 4a fe dc 08 06 [ +3.010319] IPv4: martian source 10.85.0.167 from 10.85.0.167, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 1f 49 7a 27 ae 08 06 * * ==> etcd [6039583378db] <== * {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"} {"level":"info","ts":"2022-02-21T08:55:05.915Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through 
raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:custom-weave-20220221084934-6550 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T08:55:05.916Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:55:05.917Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:55:05.919Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"} {"level":"info","ts":"2022-02-21T08:55:05.919Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T08:55:05.919Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T08:55:05.920Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"warn","ts":"2022-02-21T08:55:22.981Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.272633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T08:55:22.982Z","caller":"traceutil/trace.go:171","msg":"trace[870101966] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:0; response_revision:371; }","duration":"113.405634ms","start":"2022-02-21T08:55:22.868Z","end":"2022-02-21T08:55:22.981Z","steps":["trace[870101966] 'range keys from in-memory index tree' (duration: 113.190449ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T08:55:40.058Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"167.151839ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} {"level":"info","ts":"2022-02-21T08:55:40.058Z","caller":"traceutil/trace.go:171","msg":"trace[1744569559] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"178.043871ms","start":"2022-02-21T08:55:39.880Z","end":"2022-02-21T08:55:40.058Z","steps":["trace[1744569559] 'read index received' (duration: 10.359817ms)","trace[1744569559] 'applied index is now lower than readState.Index' (duration: 167.683229ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T08:55:40.059Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"178.172127ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T08:55:40.059Z","caller":"traceutil/trace.go:171","msg":"trace[1187793531] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:497; }","duration":"178.228302ms","start":"2022-02-21T08:55:39.880Z","end":"2022-02-21T08:55:40.059Z","steps":["trace[1187793531] 'agreement among raft nodes before linearized reading' (duration: 178.118124ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T08:55:40.059Z","caller":"traceutil/trace.go:171","msg":"trace[1378877095] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"331.875147ms","start":"2022-02-21T08:55:39.727Z","end":"2022-02-21T08:55:40.059Z","steps":["trace[1378877095] 'process raft request' (duration: 
164.196319ms)","trace[1378877095] 'compare' (duration: 167.060655ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T08:55:40.059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T08:55:39.727Z","time spent":"332.125837ms","remote":"127.0.0.1:51898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare: success:> failure: >"} {"level":"warn","ts":"2022-02-21T09:02:51.241Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"246.5049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/weave-net-dgkzh\" ","response":"range_response_count:1 size:6950"} {"level":"info","ts":"2022-02-21T09:02:51.241Z","caller":"traceutil/trace.go:171","msg":"trace[2034250247] range","detail":"{range_begin:/registry/pods/kube-system/weave-net-dgkzh; range_end:; response_count:1; response_revision:682; }","duration":"246.625029ms","start":"2022-02-21T09:02:50.994Z","end":"2022-02-21T09:02:51.241Z","steps":["trace[2034250247] 'range keys from in-memory index tree' (duration: 246.347077ms)"],"step_count":1} * * ==> kernel <== * 09:03:28 up 46 min, 0 users, load average: 4.47, 4.47, 3.60 Linux custom-weave-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [a014e0a91ecc] <== * I0221 08:55:08.402954 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:55:08.403014 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 08:55:08.403138 1 shared_informer.go:247] Caches are synced for crd-autoregister I0221 08:55:08.403155 1 cache.go:39] Caches are synced for autoregister controller I0221 08:55:08.403411 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:55:08.407626 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 08:55:09.202333 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:55:09.202360 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:55:09.211893 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:55:09.214735 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:55:09.214752 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 08:55:09.609634 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 08:55:09.641913 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 08:55:09.729729 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 08:55:09.734626 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2] I0221 08:55:09.735614 1 controller.go:611] quota admission added evaluator for: endpoints I0221 08:55:09.739395 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 08:55:10.417856 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 08:55:10.963831 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 08:55:10.972339 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 08:55:10.982337 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 08:55:11.205312 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 08:55:24.024100 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 08:55:24.173602 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 08:55:25.214967 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [93b77eb80833] <== * I0221 08:55:23.408050 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0221 08:55:23.408060 1 event.go:294] "Event occurred" object="custom-weave-20220221084934-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node custom-weave-20220221084934-6550 event: Registered Node custom-weave-20220221084934-6550 in Controller" I0221 08:55:23.431888 1 shared_informer.go:247] Caches are synced for TTL I0221 08:55:23.450654 1 shared_informer.go:247] Caches are synced for endpoint_slice I0221 08:55:23.470910 1 shared_informer.go:247] Caches are synced for GC I0221 08:55:23.473094 1 shared_informer.go:247] Caches are synced for persistent volume I0221 08:55:23.480469 1 shared_informer.go:247] Caches are synced for resource quota I0221 08:55:23.482934 1 shared_informer.go:247] Caches are synced for node I0221 08:55:23.482968 1 range_allocator.go:173] Starting range CIDR allocator I0221 08:55:23.482973 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0221 08:55:23.482981 1 shared_informer.go:247] Caches are synced for cidrallocator I0221 08:55:23.490340 1 range_allocator.go:374] Set node custom-weave-20220221084934-6550 PodCIDR to [10.244.0.0/24] I0221 08:55:23.521725 1 shared_informer.go:247] Caches are synced for stateful set I0221 08:55:23.523687 1 shared_informer.go:247] Caches are synced for resource quota I0221 08:55:23.526912 1 shared_informer.go:247] Caches are synced for daemon sets I0221 08:55:23.898780 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:55:23.898816 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0221 08:55:23.903955 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:55:24.026313 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 08:55:24.181386 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q4stn" I0221 08:55:24.183297 1 event.go:294] "Event occurred" object="kube-system/weave-net" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: weave-net-dgkzh" I0221 08:55:24.276312 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-kn627" I0221 08:55:24.280740 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-fw5hd" I0221 08:55:24.550870 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 08:55:24.556696 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-kn627" * * ==> kube-proxy [0cb891515343] <== * I0221 08:55:25.130891 1 node.go:163] Successfully retrieved node IP: 192.168.58.2 I0221 08:55:25.130972 1 server_others.go:138] "Detected node IP" address="192.168.58.2" I0221 08:55:25.131023 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 08:55:25.207154 1 server_others.go:206] "Using iptables Proxier" I0221 08:55:25.207194 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 08:55:25.207207 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 08:55:25.207249 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 08:55:25.207630 1 server.go:656] "Version info" version="v1.23.4" I0221 08:55:25.212832 1 config.go:317] "Starting service config controller" I0221 08:55:25.213026 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 08:55:25.212946 1 config.go:226] "Starting endpoint slice config controller" I0221 08:55:25.213063 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 08:55:25.313289 1 shared_informer.go:247] Caches are synced for endpoint slice config I0221 08:55:25.313423 1 shared_informer.go:247] Caches are synced for service config * * ==> kube-scheduler [b59c9c533c60] <== * E0221 08:55:08.322642 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0221 08:55:08.322648 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in 
API group "storage.k8s.io" at the cluster scope W0221 08:55:08.322466 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 08:55:08.322767 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0221 08:55:08.322780 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 08:55:08.322781 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0221 08:55:08.322946 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:55:08.322984 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:55:08.323112 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 08:55:08.323155 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 08:55:08.323175 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 08:55:08.323206 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 08:55:09.194703 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0221 08:55:09.194743 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0221 08:55:09.197585 1 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:55:09.197608 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:55:09.208788 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 08:55:09.208822 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 08:55:09.269483 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:55:09.269512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:55:09.303358 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 08:55:09.303386 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0221 08:55:09.349092 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 08:55:09.349130 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope I0221 08:55:09.720186 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 08:54:55 UTC, end at Mon 2022-02-21 09:03:28 UTC. 
-- Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.406887 1937 scope.go:110] "RemoveContainer" containerID="482d2370d9581598d5f4c8efcbda364af379d0ee5707ba13f454e267732c045b" Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:20.407453 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=weave pod=weave-net-dgkzh_kube-system(ba48aae4-721f-4a19-a470-782f7c69d914)\"" pod="kube-system/weave-net-dgkzh" podUID=ba48aae4-721f-4a19-a470-782f7c69d914 Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.415726 1937 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-fw5hd_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34\"" Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.418239 1937 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34" Feb 21 09:03:20 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:20.419838 1937 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"881f77bfde6c7ec9da43ad1ecd02b5d722a5e1d2337cf707a09fa8072bbebc34\"" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.546109 1937 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523} podNetnsPath="/proc/28845/ns/net" networkType="bridge" networkName="crio" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.610368 1937 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523} podNetnsPath="/proc/28845/ns/net" networkType="bridge" networkName="crio" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663482 1937 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t 
nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663574 1937 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663623 1937 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \"crio\" id: \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 21 09:03:22 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:22.663699 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to 
clean up sandbox container \\\"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.166 -j CNI-553e67149ecc707c6384d5f7 -m comment --comment name: \\\"crio\\\" id: \\\"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-553e67149ecc707c6384d5f7':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-fw5hd" podUID=442952fb-cceb-4c88-88d9-f45c8b015e1a Feb 21 09:03:23 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:23.446331 1937 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-fw5hd_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\"" Feb 21 09:03:23 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:23.449628 1937 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523" Feb 21 09:03:23 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:23.451184 1937 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"008864fcc48472d474c2e17424b20325fd7f5f5105d22d53f0b9a64690ef6523\"" Feb 21 09:03:24 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:24.407514 1937 scope.go:110] "RemoveContainer" containerID="9e817ce47282a2823395a8362af2a42e4cfda5432c37521da922d4379ecc1571" Feb 21 09:03:24 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:24.407806 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(07cca2bf-78dc-4768-b83e-be6bd78df3a2)\"" pod="kube-system/storage-provisioner" podUID=07cca2bf-78dc-4768-b83e-be6bd78df3a2 Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.551793 1937 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0} podNetnsPath="/proc/29008/ns/net" networkType="bridge" networkName="crio" Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.618613 1937 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-fw5hd" podSandboxID={Type:docker ID:fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0} podNetnsPath="/proc/29008/ns/net" networkType="bridge" networkName="crio" Feb 21 
09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663706 1937 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663774 1937 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663802 1937 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to set up pod \"coredns-64897985d-fw5hd_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" network for pod \"coredns-64897985d-fw5hd\": networkPlugin cni failed to teardown pod \"coredns-64897985d-fw5hd_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \"crio\" id: \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-fw5hd" Feb 
21 09:03:25 custom-weave-20220221084934-6550 kubelet[1937]: E0221 09:03:25.663871 1937 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-fw5hd_kube-system(442952fb-cceb-4c88-88d9-f45c8b015e1a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\\\" network for pod \\\"coredns-64897985d-fw5hd\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-fw5hd_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.167 -j CNI-bfea267026914d8f6d28005f -m comment --comment name: \\\"crio\\\" id: \\\"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bfea267026914d8f6d28005f':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-fw5hd" podUID=442952fb-cceb-4c88-88d9-f45c8b015e1a Feb 21 09:03:26 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:26.483358 1937 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-fw5hd_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\"" Feb 21 09:03:26 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:26.487110 1937 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0" Feb 21 09:03:26 custom-weave-20220221084934-6550 kubelet[1937]: I0221 09:03:26.488542 1937 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"fd5e26f3c4e0a622405aade59195b54dd258c3624064b92ee473676038f8d1c0\"" * * ==> storage-provisioner [9e817ce47282] <== * I0221 09:00:55.543058 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... 
F0221 09:01:25.546638 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p custom-weave-20220221084934-6550 -n custom-weave-20220221084934-6550 helpers_test.go:262: (dbg) Run: kubectl --context custom-weave-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: coredns-64897985d-fw5hd helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/custom-weave]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context custom-weave-20220221084934-6550 describe pod coredns-64897985d-fw5hd E0221 09:03:29.174010 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory helpers_test.go:276: (dbg) Non-zero exit: kubectl --context custom-weave-20220221084934-6550 describe pod coredns-64897985d-fw5hd: exit status 1 (66.307701ms) ** stderr ** Error from server (NotFound): pods "coredns-64897985d-fw5hd" not found ** /stderr ** helpers_test.go:278: kubectl --context custom-weave-20220221084934-6550 describe pod coredns-64897985d-fw5hd: exit status 1 helpers_test.go:176: Cleaning up "custom-weave-20220221084934-6550" profile ... helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p custom-weave-20220221084934-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-weave-20220221084934-6550: (2.860073013s) --- FAIL: TestNetworkPlugins/group/custom-weave (524.66s) === FAIL: . TestNetworkPlugins/group/calico/Start (553.27s) net_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p calico-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker E0221 08:54:33.149049 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker: exit status 80 (9m13.225436451s) -- stdout -- * [calico-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) - MINIKUBE_LOCATION=13641 - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube - MINIKUBE_BIN=out/minikube-linux-amd64 * Using the docker driver based on user configuration - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities * Starting control plane node calico-20220221084934-6550 in cluster calico-20220221084934-6550 * Pulling base image ... * Creating docker container (CPUs=2, Memory=2048MB) ... * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... 
- kubelet.housekeeping-interval=5m - Generating certificates and keys ... - Booting up control plane ... - Configuring RBAC rules ... * Configuring Calico (Container Networking Interface) ... * Verifying Kubernetes components... - Using image gcr.io/k8s-minikube/storage-provisioner:v5 * Enabled addons: default-storageclass, storage-provisioner -- /stdout -- ** stderr ** I0221 08:54:31.669336 223679 out.go:297] Setting OutFile to fd 1 ... I0221 08:54:31.669431 223679 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:54:31.669456 223679 out.go:310] Setting ErrFile to fd 2... I0221 08:54:31.669459 223679 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 08:54:31.669575 223679 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 08:54:31.669863 223679 out.go:304] Setting JSON to false I0221 08:54:31.671533 223679 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2226,"bootTime":1645431446,"procs":815,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 08:54:31.671604 223679 start.go:122] virtualization: kvm guest I0221 08:54:31.674304 223679 out.go:176] * [calico-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 08:54:31.675747 223679 out.go:176] - MINIKUBE_LOCATION=13641 I0221 08:54:31.674505 223679 notify.go:193] Checking for updates... I0221 08:54:31.677072 223679 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 08:54:31.678381 223679 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 08:54:31.679665 223679 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 08:54:31.680895 223679 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 08:54:31.681490 223679 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:31.681597 223679 config.go:176] Loaded profile config "cert-expiration-20220221085105-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:31.681682 223679 config.go:176] Loaded profile config "cilium-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 08:54:31.681731 223679 driver.go:344] Setting default libvirt URI to qemu:///system I0221 08:54:31.726270 223679 docker.go:132] docker version: linux-20.10.12 I0221 08:54:31.726387 223679 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:54:31.828014 223679 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true 
CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:31.757670791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 08:54:31.828153 223679 docker.go:237] overlay module found I0221 08:54:31.830095 223679 out.go:176] * Using the docker driver based on user configuration I0221 08:54:31.830122 223679 start.go:281] selected driver: docker I0221 08:54:31.830127 223679 start.go:798] validating driver "docker" against I0221 08:54:31.830150 223679 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 08:54:31.830216 223679 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 08:54:31.830236 223679 out.go:241] ! Your cgroup does not allow setting memory. ! Your cgroup does not allow setting memory. 
I0221 08:54:31.831700 223679 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 08:54:31.832312 223679 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 08:54:31.933660 223679 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:31.865164378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 08:54:31.933812 223679 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 08:54:31.933956 223679 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 08:54:31.933978 223679 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 08:54:31.933991 223679 cni.go:93] Creating CNI manager for "calico" I0221 08:54:31.934000 223679 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni I0221 08:54:31.934009 223679 start_flags.go:302] config: {Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 08:54:31.936655 223679 out.go:176] * Starting control plane node calico-20220221084934-6550 in cluster calico-20220221084934-6550 I0221 08:54:31.936718 223679 cache.go:120] Beginning downloading kic base image for docker with docker I0221 08:54:31.938119 223679 out.go:176] * Pulling base image ... 
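Before pulling, minikube first checks whether the kicbase image is already present in the local Docker daemon; the image.go lines below record that check and the eventual "found ... in local docker daemon, skipping pull". A minimal sketch of such an existence check, assuming the docker CLI is on PATH (hypothetical helper, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether the local Docker daemon already holds
    // the given image reference; `docker image inspect` exits non-zero
    // when the image is absent.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"
        if imageInDaemon(ref) {
            fmt.Println("found in local docker daemon, skipping pull")
        } else {
            fmt.Println("not cached locally, would pull")
        }
    }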
I0221 08:54:31.938156 223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:31.938186 223679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 08:54:31.938198 223679 cache.go:57] Caching tarball of preloaded images I0221 08:54:31.938250 223679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 08:54:31.938441 223679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 08:54:31.938462 223679 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 08:54:31.938612 223679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json ... I0221 08:54:31.938638 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json: {Name:mk6dfec3eeded4259016eef6692333e08748c03e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 08:54:32.001614 223679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 08:54:32.001646 223679 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 08:54:32.001665 223679 cache.go:208] Successfully downloaded all kic artifacts I0221 08:54:32.001710 223679 start.go:313] acquiring machines lock for calico-20220221084934-6550: {Name:mk9bd20451a3b8275874174c12a3c8e8fcabb93f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 08:54:32.001861 223679 start.go:317] acquired machines lock for "calico-20220221084934-6550" in 125.883µs I0221 08:54:32.001895 223679 start.go:89] Provisioning new machine with config: &{Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker 
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 08:54:32.002014 223679 start.go:126] createHost starting for "" (driver="docker") I0221 08:54:32.004421 223679 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 08:54:32.004718 223679 start.go:160] libmachine.API.Create for "calico-20220221084934-6550" (driver="docker") I0221 08:54:32.004755 223679 client.go:168] LocalClient.Create starting I0221 08:54:32.004831 223679 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 08:54:32.004868 223679 main.go:130] libmachine: Decoding PEM data... I0221 08:54:32.004896 223679 main.go:130] libmachine: Parsing certificate... I0221 08:54:32.004981 223679 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 08:54:32.005006 223679 main.go:130] libmachine: Decoding PEM data... I0221 08:54:32.005024 223679 main.go:130] libmachine: Parsing certificate... I0221 08:54:32.005451 223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 08:54:32.041628 223679 cli_runner.go:180] docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 08:54:32.041708 223679 network_create.go:254] running [docker network inspect calico-20220221084934-6550] to gather additional debugging logs... 
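The templated inspect above fails, so minikube re-runs a plain `docker network inspect` purely to capture stderr; an exit status of 1 with "No such network" (seen just below) means the network simply does not exist yet and should be created, while anything else is a real error. A rough sketch of that probe using only the docker CLI (illustrative helper, not minikube's network_create.go):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists probes the daemon for a named network, treating the
    // "No such network" failure as "absent, caller should create it".
    func networkExists(name string) (bool, error) {
        var stderr bytes.Buffer
        cmd := exec.Command("docker", "network", "inspect", name)
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            if strings.Contains(stderr.String(), "No such network") {
                return false, nil
            }
            return false, fmt.Errorf("docker network inspect %s: %v", name, err)
        }
        return true, nil
    }

    func main() {
        exists, err := networkExists("calico-20220221084934-6550")
        fmt.Println(exists, err)
    }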
I0221 08:54:32.041731 223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 W0221 08:54:32.081587 223679 cli_runner.go:180] docker network inspect calico-20220221084934-6550 returned with exit code 1 I0221 08:54:32.081619 223679 network_create.go:257] error running [docker network inspect calico-20220221084934-6550]: docker network inspect calico-20220221084934-6550: exit status 1 stdout: [] stderr: Error: No such network: calico-20220221084934-6550 I0221 08:54:32.081656 223679 network_create.go:259] output of [docker network inspect calico-20220221084934-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: calico-20220221084934-6550 ** /stderr ** I0221 08:54:32.081716 223679 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 08:54:32.120427 223679 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-8af72e223855 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:a5:dd:c8}} I0221 08:54:32.121233 223679 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3becfb688ac0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ae:26:de:33}} I0221 08:54:32.122028 223679 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000618270] misses:0} I0221 08:54:32.122088 223679 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 08:54:32.122116 223679 network_create.go:106] attempt to create docker network calico-20220221084934-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ... 
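The network.go lines above show the subnet search: 192.168.49.0/24 and 192.168.58.0/24 are skipped because existing bridges occupy them, and 192.168.67.0/24 is reserved for the new cluster. A simplified approximation of that walk, assuming only local interface addresses matter (the real code also inspects docker networks and holds reservations with a timeout):

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet steps through candidate 192.168.x.0/24 blocks (the
    // log shows minikube moving 49 -> 58 -> 67) and returns the first one
    // not already covered by a local interface.
    func firstFreeSubnet() (string, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return "", err
        }
        taken := map[string]bool{}
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok {
                taken[ipn.IP.Mask(ipn.Mask).String()] = true
            }
        }
        for third := 49; third <= 255; third += 9 {
            base := fmt.Sprintf("192.168.%d.0", third)
            if !taken[base] {
                return base + "/24", nil
            }
        }
        return "", fmt.Errorf("no free 192.168.0.0/16 subnet found")
    }

    func main() {
        subnet, err := firstFreeSubnet()
        fmt.Println(subnet, err)
    }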
I0221 08:54:32.122177 223679 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220221084934-6550 I0221 08:54:32.217845 223679 network_create.go:90] docker network calico-20220221084934-6550 192.168.67.0/24 created I0221 08:54:32.217884 223679 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220221084934-6550" container I0221 08:54:32.217960 223679 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 08:54:32.260460 223679 cli_runner.go:133] Run: docker volume create calico-20220221084934-6550 --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true I0221 08:54:32.294046 223679 oci.go:102] Successfully created a docker volume calico-20220221084934-6550 I0221 08:54:32.294150 223679 cli_runner.go:133] Run: docker run --rm --name calico-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --entrypoint /usr/bin/test -v calico-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 08:54:32.998319 223679 oci.go:106] Successfully prepared a docker volume calico-20220221084934-6550 I0221 08:54:32.998383 223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 08:54:32.998411 223679 kic.go:179] Starting extracting preloaded images to volume ... I0221 08:54:32.998566 223679 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 08:54:39.205880 223679 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.207231146s) I0221 08:54:39.205919 223679 kic.go:188] duration metric: took 6.207506 seconds to extract preloaded images to volume W0221 08:54:39.205955 223679 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 08:54:39.205964 223679 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
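The extraction above mounts the lz4 preload tarball read-only into a throwaway container and untars it into the named volume, so the node container later starts with /var already populated with cached images. A sketch of assembling that docker run, with a placeholder tarball path (illustrative only; the authoritative invocation is the one logged above):

    package main

    import "os/exec"

    func main() {
        tarball := "/path/to/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4" // placeholder path
        volume := "calico-20220221084934-6550"
        base := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"
        // --rm: throwaway container; tarball mounted read-only; the named
        // volume mounted at /extractDir; tar's lz4 filter unpacks into it.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            base, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }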
I0221 08:54:39.206012 223679 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 08:54:39.302203 223679 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220221084934-6550 --name calico-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220221084934-6550 --network calico-20220221084934-6550 --ip 192.168.67.2 --volume calico-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 08:54:39.751892 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Running}} I0221 08:54:39.788728 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:54:39.827631 223679 cli_runner.go:133] Run: docker exec calico-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables I0221 08:54:39.899385 223679 oci.go:281] the created container "calico-20220221084934-6550" has a running status. I0221 08:54:39.899415 223679 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa... I0221 08:54:40.325976 223679 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 08:54:40.437286 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:54:40.476120 223679 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 08:54:40.476145 223679 kic_runner.go:114] Args: [docker exec --privileged calico-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 08:54:40.568825 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}} I0221 08:54:40.605419 223679 machine.go:88] provisioning docker machine ... 
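The kic.go lines above create a per-machine SSH keypair and copy the public half into the container's authorized_keys via a privileged docker exec before provisioning begins. A rough sketch of the key-generation half, using golang.org/x/crypto/ssh for the authorized_keys encoding (an assumption: minikube's own helper differs in detail):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // 2048-bit RSA keypair; private half PEM-encoded like id_rsa.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // authorized_keys line, the payload later pushed into the container.
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }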
I0221 08:54:40.605466 223679 ubuntu.go:169] provisioning hostname "calico-20220221084934-6550"
I0221 08:54:40.605522 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:40.645726 223679 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:40.645994 223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49364 <nil> <nil>}
I0221 08:54:40.646023 223679 main.go:130] libmachine: About to run SSH command:
sudo hostname calico-20220221084934-6550 && echo "calico-20220221084934-6550" | sudo tee /etc/hostname
I0221 08:54:40.780620 223679 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220221084934-6550
I0221 08:54:40.780691 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:40.814209 223679 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:40.814413 223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49364 <nil> <nil>}
I0221 08:54:40.814449 223679 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\scalico-20220221084934-6550' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220221084934-6550/g' /etc/hosts;
			else
				echo '127.0.1.1 calico-20220221084934-6550' | sudo tee -a /etc/hosts;
			fi
		fi
I0221 08:54:40.938947 223679 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0221 08:54:40.938980 223679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube}
I0221 08:54:40.939035 223679 ubuntu.go:177] setting up certificates
I0221 08:54:40.939046 223679 provision.go:83] configureAuth start
I0221 08:54:40.939089 223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550
I0221 08:54:40.975796 223679 provision.go:138] copyHostCerts
I0221 08:54:40.975850 223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ...
I0221 08:54:40.975857 223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem
I0221 08:54:40.975903 223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes)
I0221 08:54:40.975970 223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ...
I0221 08:54:40.975988 223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem
I0221 08:54:40.976005 223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes)
I0221 08:54:40.976063 223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ...
I0221 08:54:40.976102 223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem
I0221 08:54:40.976121 223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes)
I0221 08:54:40.976166 223679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.calico-20220221084934-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220221084934-6550]
I0221 08:54:41.313676 223679 provision.go:172] copyRemoteCerts
I0221 08:54:41.313739 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0221 08:54:41.313767 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:41.349452 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:41.438412 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0221 08:54:41.457832 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0221 08:54:41.476216 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0221 08:54:41.495583 223679 provision.go:86] duration metric: configureAuth took 556.525196ms
I0221 08:54:41.495616 223679 ubuntu.go:193] setting minikube options for container-runtime
I0221 08:54:41.495815 223679 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:54:41.495870 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:41.533059 223679 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:41.533198 223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49364 <nil> <nil>}
I0221 08:54:41.533213 223679 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0221 08:54:41.655048 223679 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0221 08:54:41.655077 223679 ubuntu.go:71] root file system type: overlay
I0221 08:54:41.655267 223679 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 08:54:41.655327 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:41.689366 223679 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:41.689505 223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49364 <nil> <nil>}
I0221 08:54:41.689565 223679 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 08:54:41.822029 223679 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0221 08:54:41.822112 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:41.859291 223679 main.go:130] libmachine: Using SSH client type: native
I0221 08:54:41.859435 223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil> [] 0s} 127.0.0.1 49364 <nil> <nil>}
I0221 08:54:41.859452 223679 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 08:54:42.534877 223679 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-02-21 08:54:41.817826590 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0221 08:54:42.534914 223679 machine.go:91] provisioned docker machine in 1.929466074s
I0221 08:54:42.534924 223679 client.go:171] LocalClient.Create took 10.53016081s
I0221 08:54:42.534936 223679 start.go:168] duration metric: libmachine.API.Create for "calico-20220221084934-6550" took 10.530218344s
I0221 08:54:42.534945 223679 start.go:267] post-start starting for "calico-20220221084934-6550" (driver="docker")
I0221 08:54:42.534950 223679 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 08:54:42.535085 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 08:54:42.535124 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:42.570227 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:42.659420 223679 ssh_runner.go:195] Run: cat /etc/os-release
I0221 08:54:42.662549 223679 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 08:54:42.662589 223679 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 08:54:42.662602 223679 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 08:54:42.662610 223679 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 08:54:42.662627 223679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
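The unit rewrite above is deliberately idempotent: the desired unit is staged as docker.service.new, and the daemon is only reloaded and restarted when diff -u reports a difference. The same pattern reduced to its skeleton (UNIT is a placeholder path; the staged file is assumed to have been written already, as the printf | sudo tee step does above):

  UNIT=/lib/systemd/system/docker.service
  sudo diff -u "$UNIT" "$UNIT.new" || {
    sudo mv "$UNIT.new" "$UNIT"        # swap the unit in only when it changed
    sudo systemctl -f daemon-reload &&
    sudo systemctl -f enable docker &&
    sudo systemctl -f restart docker
  }

Clearing ExecStart= before setting it again is what lets the generated unit replace, rather than append to, the command inherited from the base configuration, exactly as the comment block inside the file explains.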
I0221 08:54:42.662691 223679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ...
I0221 08:54:42.662786 223679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs
I0221 08:54:42.662899 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0221 08:54:42.670331 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes)
I0221 08:54:42.689477 223679 start.go:270] post-start completed in 154.520884ms
I0221 08:54:42.689843 223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550
I0221 08:54:42.730023 223679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json ...
I0221 08:54:42.730315 223679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0221 08:54:42.730369 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:42.767727 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:42.851528 223679 start.go:129] duration metric: createHost completed in 10.849499789s
I0221 08:54:42.851567 223679 start.go:80] releasing machines lock for "calico-20220221084934-6550", held for 10.849686754s
I0221 08:54:42.851656 223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550
I0221 08:54:42.893166 223679 ssh_runner.go:195] Run: systemctl --version
I0221 08:54:42.893224 223679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0221 08:54:42.893229 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:42.893280 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:54:42.935097 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:42.939437 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:54:43.165553 223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0221 08:54:43.176428 223679 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 08:54:43.186305 223679 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0221 08:54:43.186358 223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0221 08:54:43.196307 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0221 08:54:43.209884 223679 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0221 08:54:43.297602 223679 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0221 08:54:43.367679 223679 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 08:54:43.377417 223679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0221 08:54:43.457703 223679 ssh_runner.go:195] Run: sudo systemctl start docker
I0221 08:54:43.467810 223679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 08:54:43.509287 223679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 08:54:43.551952 223679 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
I0221 08:54:43.552042 223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 08:54:43.590101 223679 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0221 08:54:43.593455 223679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 08:54:43.604974 223679 out.go:176]   - kubelet.housekeeping-interval=5m
I0221 08:54:43.605063 223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 08:54:43.605146 223679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 08:54:43.639090 223679 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0221 08:54:43.639119 223679 docker.go:537] Images already preloaded, skipping extraction
I0221 08:54:43.639171 223679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 08:54:43.676921 223679 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0221 08:54:43.676951 223679 cache_images.go:84] Images are preloaded, skipping loading
I0221 08:54:43.677005 223679 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0221 08:54:43.775624 223679 cni.go:93] Creating CNI manager for "calico"
I0221 08:54:43.775650 223679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
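The pair of /etc/hosts entries above shows how a host record is injected idempotently: a grep first checks whether the record already exists, then a rewrite filters out any stale line before appending the fresh one, staged through a temp file because the redirect itself must not run under sudo. The same pattern in isolation; IP and NAME are placeholders:

  IP=192.168.67.1; NAME=host.minikube.internal
  # Drop any stale record for NAME, append the current one, then copy back with sudo.
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$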
I0221 08:54:43.775662 223679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220221084934-6550 NodeName:calico-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 08:54:43.775783 223679 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "calico-20220221084934-6550"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 08:54:43.775860 223679 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2

[Install]
 config:
{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0221 08:54:43.775903 223679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
I0221 08:54:43.783049 223679 binaries.go:44] Found k8s binaries, skipping transfer
I0221 08:54:43.783112 223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0221 08:54:43.790080 223679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
I0221 08:54:43.803657 223679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0221 08:54:43.817305 223679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I0221 08:54:43.832073 223679 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0221 08:54:43.835308 223679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 08:54:43.845202 223679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550 for IP: 192.168.67.2
I0221 08:54:43.845320 223679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
I0221 08:54:43.845374 223679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
I0221 08:54:43.845436 223679 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key
I0221 08:54:43.845456 223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt with IP's: []
I0221 08:54:44.006432 223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt ...
I0221 08:54:44.006474 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt: {Name:mk855fbba0271a5174ba2c17a62536f5fc002b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:54:44.006707 223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key ...
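The certs.go/crypto.go entries here and just below mint a per-profile chain in-process (Go's crypto libraries, not a CLI): a client cert for kubectl, an apiserver cert carrying the node, service, and loopback IPs as SANs, and an aggregator proxy-client cert. For illustration only, an openssl equivalent of one CA-signed, SAN-bearing cert; file names and the subject are placeholders, and the SAN list mirrors the one logged below:

  # Assumes ca.crt/ca.key already exist (minikube reuses its minikubeCA here).
  openssl req -newkey rsa:2048 -nodes -keyout apiserver.key \
    -out apiserver.csr -subj "/CN=minikube"
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')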
I0221 08:54:44.006730 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key: {Name:mk6b07f68ad6023650adafd135358280d1825bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:54:44.006871 223679 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e
I0221 08:54:44.006897 223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0221 08:54:44.294014 223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e ...
I0221 08:54:44.294052 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e: {Name:mkb18de625bf9d4b1da4d8c0e20b7c74d4689d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:54:44.294290 223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e ...
I0221 08:54:44.294313 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e: {Name:mk342d0f120f3782db5aaad19a32574ae0c04f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:54:44.294434 223679 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt
I0221 08:54:44.294491 223679 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key
I0221 08:54:44.294537 223679 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key
I0221 08:54:44.294551 223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt with IP's: []
I0221 08:54:44.518976 223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt ...
I0221 08:54:44.519036 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt: {Name:mk6f6f43267f4534ff28d48ba090d2600cf0e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:54:44.519265 223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key ...
I0221 08:54:44.519291 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key: {Name:mk80acd65e2e1b5036bf09d5fa5ec12f9e2086fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:54:44.519541 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes)
W0221 08:54:44.519593 223679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes
I0221 08:54:44.519633 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes)
I0221 08:54:44.519678 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes)
I0221 08:54:44.519730 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes)
I0221 08:54:44.519770 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes)
I0221 08:54:44.519828 223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes)
I0221 08:54:44.521210 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0221 08:54:44.558411 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0221 08:54:44.579347 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0221 08:54:44.604843 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0221 08:54:44.627275 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0221 08:54:44.648374 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0221 08:54:44.669879 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0221 08:54:44.689847 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0221 08:54:44.709519 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes)
I0221 08:54:44.733150 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0221 08:54:44.756964 223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes)
I0221 08:54:44.778521 223679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0221 08:54:44.793575 223679 ssh_runner.go:195] Run: openssl version
I0221 08:54:44.798665 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem"
I0221 08:54:44.808787 223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem
I0221 08:54:44.812470 223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem
I0221 08:54:44.812527 223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem
I0221 08:54:44.817903 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0"
I0221 08:54:44.827601 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0221 08:54:44.865122 223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0221 08:54:44.891782 223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem
I0221 08:54:44.891866 223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0221 08:54:44.899116 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0221 08:54:44.909368 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem"
I0221 08:54:44.920591 223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem
I0221 08:54:44.925480 223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem
I0221 08:54:44.925592 223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem
I0221 08:54:44.932674 223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0"
I0221 08:54:44.947547 223679 kubeadm.go:391] StartCluster: {Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0221 08:54:44.947712 223679 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0221 08:54:44.991618 223679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0221 08:54:44.998885 223679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0221 08:54:45.015354 223679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0221 08:54:45.015414 223679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0221 08:54:45.028145 223679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0221 08:54:45.028193 223679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0221 08:54:45.659427 223679 out.go:203]   - Generating certificates and keys ...
I0221 08:54:48.200933 223679 out.go:203]   - Booting up control plane ...
I0221 08:55:02.748988 223679 out.go:203]   - Configuring RBAC rules ...
I0221 08:55:03.208968 223679 cni.go:93] Creating CNI manager for "calico"
I0221 08:55:03.211365 223679 out.go:176] * Configuring Calico (Container Networking Interface) ...
I0221 08:55:03.211657 223679 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ...
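One detail worth pulling out of the cert block above: installing a CA into the node's trust store follows the classic OpenSSL layout, where the PEM lives under /usr/share/ca-certificates and /etc/ssl/certs gets a <subject-hash>.0 symlink, which is the name OpenSSL actually looks up. Reduced to a reusable sketch (CERT is a placeholder path; the hash values in the log, such as b5213941, come from exactly this openssl call):

  CERT=/usr/share/ca-certificates/myca.pem    # placeholder
  sudo test -s "$CERT" && sudo ln -fs "$CERT" /etc/ssl/certs/"$(basename "$CERT")"
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo test -L /etc/ssl/certs/"$HASH".0 || \
    sudo ln -fs /etc/ssl/certs/"$(basename "$CERT")" /etc/ssl/certs/"$HASH".0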
I0221 08:55:03.211681 223679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
I0221 08:55:03.227608 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0221 08:55:04.757338 223679 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.529692552s)
I0221 08:55:04.757387 223679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0221 08:55:04.757470 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:04.757473 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=calico-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:04.850953 223679 ops.go:34] apiserver oom_adj: -16
I0221 08:55:04.851063 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:05.440068 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:05.940254 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:06.440215 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:06.940222 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:07.440213 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:07.939923 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:08.439546 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:08.940223 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:09.440124 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:09.939702 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:10.439575 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:10.940202 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:11.439703 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:11.939963 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:12.439836 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:12.939553 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:13.439654 223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0221 08:55:13.497568 223679 kubeadm.go:1020] duration metric: took 8.740153817s to wait for elevateKubeSystemPrivileges.
I0221 08:55:13.497601 223679 kubeadm.go:393] StartCluster complete in 28.550066987s
I0221 08:55:13.497616 223679 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:55:13.497683 223679 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
I0221 08:55:13.498747 223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0221 08:55:14.022464 223679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220221084934-6550" rescaled to 1
I0221 08:55:14.022509 223679 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0221 08:55:14.024435 223679 out.go:176] * Verifying Kubernetes components...
I0221 08:55:14.024485 223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0221 08:55:14.022561 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0221 08:55:14.022577 223679 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0221 08:55:14.024575 223679 addons.go:65] Setting storage-provisioner=true in profile "calico-20220221084934-6550"
I0221 08:55:14.022730 223679 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 08:55:14.024592 223679 addons.go:65] Setting default-storageclass=true in profile "calico-20220221084934-6550"
I0221 08:55:14.024599 223679 addons.go:153] Setting addon storage-provisioner=true in "calico-20220221084934-6550"
I0221 08:55:14.024606 223679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220221084934-6550"
W0221 08:55:14.024612 223679 addons.go:165] addon storage-provisioner should already be in state true
I0221 08:55:14.024642 223679 host.go:66] Checking if "calico-20220221084934-6550" exists ...
I0221 08:55:14.024913 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
I0221 08:55:14.025104 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
I0221 08:55:14.038203 223679 node_ready.go:35] waiting up to 5m0s for node "calico-20220221084934-6550" to be "Ready" ...
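The burst of identical kubectl get sa default entries above is a plain poll: right after kubeadm finishes, the kube-system controllers have not yet minted the default ServiceAccount, so minikube retries roughly every 500ms until the lookup succeeds (8.74s here) before the cluster-admin binding can take effect. The same wait written directly in shell; the timeout and kubeconfig path are illustrative:

  # Poll until the default ServiceAccount exists; give up after ~60s.
  for i in $(seq 1 120); do
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1 && break
    sleep 0.5
  done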
I0221 08:55:14.042490 223679 node_ready.go:49] node "calico-20220221084934-6550" has status "Ready":"True"
I0221 08:55:14.042526 223679 node_ready.go:38] duration metric: took 4.281504ms waiting for node "calico-20220221084934-6550" to be "Ready" ...
I0221 08:55:14.042537 223679 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 08:55:14.064216 223679 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ...
I0221 08:55:14.068536 223679 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0221 08:55:14.068650 223679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0221 08:55:14.068667 223679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0221 08:55:14.068718 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:55:14.071204 223679 addons.go:153] Setting addon default-storageclass=true in "calico-20220221084934-6550"
W0221 08:55:14.071226 223679 addons.go:165] addon default-storageclass should already be in state true
I0221 08:55:14.071248 223679 host.go:66] Checking if "calico-20220221084934-6550" exists ...
I0221 08:55:14.071675 223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
I0221 08:55:14.095438 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0221 08:55:14.121614 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:55:14.130797 223679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0221 08:55:14.130824 223679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0221 08:55:14.130878 223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
I0221 08:55:14.166553 223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
I0221 08:55:14.505375 223679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0221 08:55:14.506353 223679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0221 08:55:16.015822 223679 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.92034198s)
I0221 08:55:16.015851 223679 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
I0221 08:55:16.020294 223679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.514878245s)
I0221 08:55:16.106779 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:16.116155 223679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.609765842s)
I0221 08:55:16.117844 223679 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
I0221 08:55:16.117871 223679 addons.go:417] enableAddons completed in 2.095295955s
I0221 08:55:18.608129 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:21.084145 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:23.583507 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:26.082513 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:28.584036 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:30.607366 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:32.608422 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:34.608830 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:37.082853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:39.082914 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:41.084278 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:43.583801 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:46.104227 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:48.608316 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:51.082452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:53.082812 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:55.604982 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:55:58.083480 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:00.107900 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:02.108600 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:04.109005 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:06.608183 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:09.083257 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:11.584369 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:13.603328 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:15.607461 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:17.608185 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:20.103368 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:22.106959 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:24.109509 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:26.606973 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:28.607609 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:31.082276 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:33.107320 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:35.583226 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:38.107435 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:40.606736 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:43.082434 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:45.107171 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:47.583447 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:49.608204 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:51.608560 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:54.108380 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:56.583351 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:56:59.083417 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:01.108902 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:03.608727 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:06.083201 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:08.606947 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:11.085043 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:13.606594 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:16.104269 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:18.582815 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:20.585066 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:23.083375 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:25.108449 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:27.607457 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:29.607786 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:32.085234 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:34.109374 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:36.583295 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:39.105966 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:41.606692 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:44.106976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:46.583983 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:49.084072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:51.112230 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:53.606853 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:55.607543 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:57:58.108377 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:00.608452 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:03.082697 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:05.107411 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:07.583427 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:10.086403 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:12.582090 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:14.607319 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:17.083915 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:19.607890 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:22.082238 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:24.107976 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:26.608511 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:29.107566 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:31.108790 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:33.582823 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:35.586175 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:37.607126 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:40.082258 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:42.108072 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:44.607510 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:46.608936 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:48.609972 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:51.082477 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:53.105968 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:55.582165 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:57.606112 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:58:59.608167 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:02.106572 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:04.107313 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:06.108123 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:08.108992 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:10.582664 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:12.583673 223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
I0221 08:59:14.112706 223679 pod_ready.go:81] duration metric: took 4m0.048450561s waiting for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ...
E0221 08:59:14.112734 223679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0221 08:59:14.112746 223679 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.117793 223679 pod_ready.go:92] pod "etcd-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:14.117820 223679 pod_ready.go:81] duration metric: took 5.066157ms waiting for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.117832 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.122627 223679 pod_ready.go:92] pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:14.122647 223679 pod_ready.go:81] duration metric: took 4.807147ms waiting for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.122656 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.127594 223679 pod_ready.go:92] pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:14.127616 223679 pod_ready.go:81] duration metric: took 4.954276ms waiting for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
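[Editor's note: the four-minute pod_ready wait above keeps re-reading the pod and testing its Ready condition. A minimal client-go sketch of such a check follows; it assumes a reachable kubeconfig and is not the pod_ready.go implementation quoted in the log.]

```go
// pod_ready_sketch.go: check whether a pod reports the Ready condition,
// polling the way the pod_ready.go entries above suggest (interval assumed).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, roughly the ~2.5s cadence of the log entries above.
	for {
		ready, err := podIsReady(cs, "kube-system", "calico-node-zcdj6")
		fmt.Println("ready:", ready, "err:", err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```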
I0221 08:59:14.127627 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.480801 223679 pod_ready.go:92] pod "kube-proxy-kwcvx" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:14.480829 223679 pod_ready.go:81] duration metric: took 353.19554ms waiting for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.480842 223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.879906 223679 pod_ready.go:92] pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
I0221 08:59:14.879927 223679 pod_ready.go:81] duration metric: took 399.077104ms waiting for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
I0221 08:59:14.879937 223679 pod_ready.go:38] duration metric: took 4m0.837387313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0221 08:59:14.879961 223679 api_server.go:51] waiting for apiserver process to appear ...
I0221 08:59:14.880012 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 08:59:14.942433 223679 logs.go:274] 1 containers: [5b808a7ef4a2]
I0221 08:59:14.942510 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 08:59:15.037787 223679 logs.go:274] 1 containers: [96cc9489b33e]
I0221 08:59:15.037848 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 08:59:15.134487 223679 logs.go:274] 0 containers: []
W0221 08:59:15.134520 223679 logs.go:276] No container was found matching "coredns"
I0221 08:59:15.134573 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 08:59:15.229656 223679 logs.go:274] 1 containers: [f012d1d45e22]
I0221 08:59:15.229733 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 08:59:15.320906 223679 logs.go:274] 1 containers: [449cc37a92fe]
I0221 08:59:15.320985 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 08:59:15.417453 223679 logs.go:274] 0 containers: []
W0221 08:59:15.417481 223679 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 08:59:15.417528 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 08:59:15.513893 223679 logs.go:274] 2 containers: [528acfa448ce f6cf402c0c9d]
I0221 08:59:15.513990 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 08:59:15.550415 223679 logs.go:274] 1 containers: [cddc9ef001f2]
I0221 08:59:15.550454 223679 logs.go:123] Gathering logs for dmesg ...
I0221 08:59:15.550465 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 08:59:15.576242 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ...
I0221 08:59:15.576295 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e"
I0221 08:59:15.618102 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ...
I0221 08:59:15.618136 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe"
I0221 08:59:15.656954 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ...
I0221 08:59:15.656987 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce"
I0221 08:59:15.722111 223679 logs.go:123] Gathering logs for storage-provisioner [f6cf402c0c9d] ...
I0221 08:59:15.722147 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6cf402c0c9d"
I0221 08:59:15.808702 223679 logs.go:123] Gathering logs for Docker ...
I0221 08:59:15.808737 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 08:59:15.889269 223679 logs.go:123] Gathering logs for container status ...
I0221 08:59:15.889312 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 08:59:15.945538 223679 logs.go:123] Gathering logs for kubelet ...
I0221 08:59:15.945571 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 08:59:16.147141 223679 logs.go:123] Gathering logs for describe nodes ...
I0221 08:59:16.147186 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 08:59:16.338070 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ...
I0221 08:59:16.338111 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2"
I0221 08:59:16.431605 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ...
I0221 08:59:16.431645 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22"
I0221 08:59:16.530228 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ...
I0221 08:59:16.530264 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2"
I0221 08:59:19.103148 223679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0221 08:59:19.129062 223679 api_server.go:71] duration metric: took 4m5.106529752s to wait for apiserver process to appear ...
I0221 08:59:19.129100 223679 api_server.go:87] waiting for apiserver healthz status ...
I0221 08:59:19.129165 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 08:59:19.224393 223679 logs.go:274] 1 containers: [5b808a7ef4a2]
I0221 08:59:19.224460 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 08:59:19.319828 223679 logs.go:274] 1 containers: [96cc9489b33e]
I0221 08:59:19.319900 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 08:59:19.418463 223679 logs.go:274] 0 containers: []
W0221 08:59:19.418495 223679 logs.go:276] No container was found matching "coredns"
I0221 08:59:19.418541 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 08:59:19.516431 223679 logs.go:274] 1 containers: [f012d1d45e22]
I0221 08:59:19.516522 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 08:59:19.607457 223679 logs.go:274] 1 containers: [449cc37a92fe]
I0221 08:59:19.607543 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 08:59:19.644308 223679 logs.go:274] 0 containers: []
W0221 08:59:19.644330 223679 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 08:59:19.644368 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 08:59:19.677987 223679 logs.go:274] 1 containers: [528acfa448ce]
I0221 08:59:19.678065 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 08:59:19.711573 223679 logs.go:274] 1 containers: [cddc9ef001f2]
I0221 08:59:19.711614 223679 logs.go:123] Gathering logs for dmesg ...
I0221 08:59:19.711634 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 08:59:19.739316 223679 logs.go:123] Gathering logs for describe nodes ...
I0221 08:59:19.739352 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0221 08:59:19.829642 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ...
I0221 08:59:19.829686 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e"
I0221 08:59:19.928327 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ...
I0221 08:59:19.928367 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22"
I0221 08:59:20.030039 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ...
I0221 08:59:20.030084 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce"
I0221 08:59:20.115493 223679 logs.go:123] Gathering logs for kubelet ...
I0221 08:59:20.115539 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 08:59:20.289828 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ...
I0221 08:59:20.289874 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe"
I0221 08:59:20.351337 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ...
I0221 08:59:20.351388 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2"
I0221 08:59:20.480018 223679 logs.go:123] Gathering logs for Docker ...
I0221 08:59:20.480056 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 08:59:20.594320 223679 logs.go:123] Gathering logs for container status ...
I0221 08:59:20.594358 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 08:59:20.641023 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ...
I0221 08:59:20.641062 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2"
I0221 08:59:23.238237 223679 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 08:59:23.244347 223679 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok
I0221 08:59:23.246494 223679 api_server.go:140] control plane version: v1.23.4
I0221 08:59:23.246519 223679 api_server.go:130] duration metric: took 4.1174116s to wait for apiserver health ...
I0221 08:59:23.246529 223679 system_pods.go:43] waiting for kube-system pods to appear ...
I0221 08:59:23.246581 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0221 08:59:23.331088 223679 logs.go:274] 1 containers: [5b808a7ef4a2]
I0221 08:59:23.331164 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0221 08:59:23.425220 223679 logs.go:274] 1 containers: [96cc9489b33e]
I0221 08:59:23.425297 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0221 08:59:23.510198 223679 logs.go:274] 0 containers: []
W0221 08:59:23.510230 223679 logs.go:276] No container was found matching "coredns"
I0221 08:59:23.510284 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0221 08:59:23.548794 223679 logs.go:274] 1 containers: [f012d1d45e22]
I0221 08:59:23.548859 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0221 08:59:23.642803 223679 logs.go:274] 1 containers: [449cc37a92fe]
I0221 08:59:23.642891 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0221 08:59:23.735232 223679 logs.go:274] 0 containers: []
W0221 08:59:23.735263 223679 logs.go:276] No container was found matching "kubernetes-dashboard"
I0221 08:59:23.735316 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0221 08:59:23.820175 223679 logs.go:274] 1 containers: [528acfa448ce]
I0221 08:59:23.820245 223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0221 08:59:23.911162 223679 logs.go:274] 1 containers: [cddc9ef001f2]
I0221 08:59:23.911205 223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ...
I0221 08:59:23.911218 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce"
I0221 08:59:24.010277 223679 logs.go:123] Gathering logs for kubelet ...
I0221 08:59:24.010307 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0221 08:59:24.188331 223679 logs.go:123] Gathering logs for dmesg ...
I0221 08:59:24.188378 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0221 08:59:24.235517 223679 logs.go:123] Gathering logs for describe nodes ...
I0221 08:59:24.235564 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
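[Editor's note: the healthz probe logged above ("returned 200: ok") amounts to an HTTPS GET against the apiserver's /healthz endpoint. A minimal Go sketch follows; real clients, minikube included, authenticate with the cluster CA and client certificates, so the InsecureSkipVerify shortcut here is an illustrative assumption only.]

```go
// healthz_sketch.go: probe the apiserver health endpoint seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo shortcut: skip cert verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
```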
I0221 08:59:24.433778 223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ...
I0221 08:59:24.433815 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22"
I0221 08:59:24.542462 223679 logs.go:123] Gathering logs for Docker ...
I0221 08:59:24.542562 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0221 08:59:24.683898 223679 logs.go:123] Gathering logs for container status ...
I0221 08:59:24.683938 223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0221 08:59:24.747804 223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ...
I0221 08:59:24.747846 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2"
I0221 08:59:24.839623 223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ...
I0221 08:59:24.839664 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e"
I0221 08:59:24.933214 223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ...
I0221 08:59:24.933249 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe"
I0221 08:59:24.970081 223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ...
I0221 08:59:24.970115 223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2"
I0221 08:59:27.559651 223679 system_pods.go:59] 9 kube-system pods found
I0221 08:59:27.559689 223679 system_pods.go:61] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:27.559697 223679 system_pods.go:61] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:27.559703 223679 system_pods.go:61] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:27.559708 223679 system_pods.go:61] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:27.559713 223679 system_pods.go:61] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:27.559717 223679 system_pods.go:61] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:27.559722 223679 system_pods.go:61] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:27.559726 223679 system_pods.go:61] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:27.559734 223679 system_pods.go:61] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:27.559742 223679 system_pods.go:74] duration metric: took 4.313209437s to wait for pod list to return data ...
I0221 08:59:27.559749 223679 default_sa.go:34] waiting for default service account to be created ...
I0221 08:59:27.562671 223679 default_sa.go:45] found service account: "default"
I0221 08:59:27.562697 223679 default_sa.go:55] duration metric: took 2.939018ms for default service account to be created ...
I0221 08:59:27.562709 223679 system_pods.go:116] waiting for k8s-apps to be running ...
I0221 08:59:27.606750 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:27.606791 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:27.606820 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:27.606832 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:27.606849 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:27.606856 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:27.606863 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:27.606870 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:27.606880 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:27.606889 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:27.606913 223679 retry.go:31] will retry after 263.082536ms: missing components: kube-dns
I0221 08:59:27.875522 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:27.875558 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:27.875569 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:27.875575 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:27.875581 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:27.875586 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:27.875590 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:27.875593 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:27.875598 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:27.875603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:27.875619 223679 retry.go:31] will retry after 381.329545ms: missing components: kube-dns
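[Editor's note: the growing "will retry after ..." delays here come from retry.go's backoff. A standalone Go sketch of retry-with-increasing-delay follows; the ~1.5x growth factor and jitter are illustrative guesses, not retry.go's actual parameters.]

```go
// retry_sketch.go: back off between attempts the way the
// "retry.go:31] will retry after ..." entries suggest.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, f func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Grow the delay ~1.5x per attempt and add up to 50% jitter.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	i := 0
	_ = retryWithBackoff(14, 250*time.Millisecond, func() error {
		i++
		if i < 5 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
```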
I0221 08:59:28.262703 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:28.262737 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:28.262745 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:28.262752 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:28.262757 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:28.262764 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:28.262770 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:28.262776 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:28.262782 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:28.262789 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:28.262812 223679 retry.go:31] will retry after 422.765636ms: missing components: kube-dns
I0221 08:59:28.708387 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:28.708425 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:28.708467 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:28.708488 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:28.708506 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:28.708519 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:28.708531 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:28.708537 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:28.708544 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:28.708559 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:28.708575 223679 retry.go:31] will retry after 473.074753ms: missing components: kube-dns
I0221 08:59:29.187326 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:29.187359 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:29.187367 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:29.187374 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:29.187379 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:29.187384 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:29.187388 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:29.187392 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:29.187396 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:29.187401 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:29.187414 223679 retry.go:31] will retry after 587.352751ms: missing components: kube-dns
I0221 08:59:29.807999 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:29.808041 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:29.808052 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:29.808062 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:29.808069 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:29.808077 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:29.808087 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:29.808093 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:29.808103 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:29.808113 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:29.808133 223679 retry.go:31] will retry after 834.206799ms: missing components: kube-dns
I0221 08:59:30.649684 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:30.649731 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:30.649746 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:30.649756 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:30.649766 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:30.649778 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:30.649792 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:30.649806 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:30.649817 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:30.649831 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:30.649852 223679 retry.go:31] will retry after 746.553905ms: missing components: kube-dns
I0221 08:59:31.403363 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:31.403414 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:31.403426 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:31.403438 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:31.403446 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:31.403455 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:31.403466 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:31.403474 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:31.403488 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:31.403498 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:31.403522 223679 retry.go:31] will retry after 987.362415ms: missing components: kube-dns
I0221 08:59:32.397015 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:32.397055 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:32.397064 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:32.397075 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:32.397083 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:32.397090 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:32.397103 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:32.397110 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:32.397121 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:32.397132 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:32.397148 223679 retry.go:31] will retry after 1.189835008s: missing components: kube-dns
I0221 08:59:33.607429 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:33.607467 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:33.607475 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:33.607484 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:33.607493 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:33.607500 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:33.607507 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:33.607531 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:33.607541 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:33.607550 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:33.607570 223679 retry.go:31] will retry after 1.677229867s: missing components: kube-dns
I0221 08:59:35.291721 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:35.291757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:35.291767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:35.291776 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:35.291783 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:35.291792 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:35.291798 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:35.291809 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:35.291815 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:35.291826 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:35.291840 223679 retry.go:31] will retry after 2.346016261s: missing components: kube-dns
I0221 08:59:37.644075 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:37.644109 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:37.644117 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:37.644124 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:37.644131 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:37.644136 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:37.644140 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:37.644144 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:37.644147 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:37.644153 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:37.644169 223679 retry.go:31] will retry after 3.36678925s: missing components: kube-dns
I0221 08:59:41.020218 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:41.020262 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:41.020274 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:41.020284 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:41.020290 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:41.020296 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:41.020301 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:41.020307 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:41.020324 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:41.020332 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:41.020346 223679 retry.go:31] will retry after 3.11822781s: missing components: kube-dns
I0221 08:59:44.146493 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:44.146526 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:44.146534 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:44.146544 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:44.146552 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:44.146563 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:44.146570 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:44.146582 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:44.146593 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:44.146603 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:44.146623 223679 retry.go:31] will retry after 4.276119362s: missing components: kube-dns
I0221 08:59:48.430784 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:48.430822 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:48.430855 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:48.430867 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:48.430880 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:48.430889 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:48.430901 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:48.430911 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221 08:59:48.430921 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
I0221 08:59:48.430931 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0221 08:59:48.431005 223679 retry.go:31] will retry after 5.167232101s: missing components: kube-dns
I0221 08:59:53.607863 223679 system_pods.go:86] 9 kube-system pods found
I0221 08:59:53.607910 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I0221 08:59:53.607925 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I0221 08:59:53.607936 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0221 08:59:53.607950 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
I0221 08:59:53.607957 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
I0221 08:59:53.607965 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
I0221 08:59:53.607971 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
I0221
08:59:53.607979 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 08:59:53.607991 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 08:59:53.608009 223679 retry.go:31] will retry after 6.994901864s: missing components: kube-dns I0221 09:00:00.608725 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:00.608757 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:00.608767 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:00.608774 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:00.608778 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:00.608783 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:00.608788 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:00.608791 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:00.608796 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:00.608801 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:00.608818 223679 retry.go:31] will retry after 7.91826225s: missing components: kube-dns I0221 09:00:08.534545 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:08.534589 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:08.534602 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:08.534613 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:08.534621 223679 system_pods.go:89] 
"etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:08.534630 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:08.534642 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:08.534654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:08.534665 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:08.534678 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:08.534700 223679 retry.go:31] will retry after 9.953714808s: missing components: kube-dns I0221 09:00:18.494832 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:18.494873 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:18.494884 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:18.494893 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:18.494898 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:18.494903 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:18.494909 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:18.494918 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:18.494925 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:18.494935 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:18.494956 223679 retry.go:31] will retry after 15.120437328s: missing components: kube-dns I0221 09:00:33.622907 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:33.622950 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:33.622961 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / 
Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:33.622970 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:33.622977 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:33.622983 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:33.622989 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:33.623036 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:33.623050 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:33.623058 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:00:33.623079 223679 retry.go:31] will retry after 14.90607158s: missing components: kube-dns I0221 09:00:48.536869 223679 system_pods.go:86] 9 kube-system pods found I0221 09:00:48.536919 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:00:48.536931 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:00:48.536941 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:00:48.536949 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:00:48.536955 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:00:48.536959 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:00:48.536964 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:00:48.536968 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:00:48.536982 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running I0221 09:00:48.536998 223679 retry.go:31] will retry after 18.465989061s: missing components: kube-dns I0221 09:01:07.010825 223679 system_pods.go:86] 9 kube-system pods found I0221 09:01:07.010865 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / 
Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:01:07.010877 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:01:07.010887 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:01:07.010895 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:01:07.010902 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:01:07.010908 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:01:07.010925 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:01:07.010931 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:01:07.010939 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running I0221 09:01:07.010960 223679 retry.go:31] will retry after 25.219510332s: missing components: kube-dns I0221 09:01:32.236004 223679 system_pods.go:86] 9 kube-system pods found I0221 09:01:32.236044 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:01:32.236056 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:01:32.236064 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:01:32.236072 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:01:32.236078 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:01:32.236084 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:01:32.236091 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:01:32.236097 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:01:32.236107 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:01:32.236125 223679 
retry.go:31] will retry after 35.078569648s: missing components: kube-dns I0221 09:02:07.320903 223679 system_pods.go:86] 9 kube-system pods found I0221 09:02:07.320944 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:02:07.320955 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:02:07.320961 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:02:07.320967 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:02:07.320973 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:02:07.320977 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:02:07.320981 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:02:07.320985 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:02:07.320990 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:02:07.321002 223679 retry.go:31] will retry after 50.027701973s: missing components: kube-dns I0221 09:02:57.356331 223679 system_pods.go:86] 9 kube-system pods found I0221 09:02:57.356379 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:02:57.356394 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:02:57.356411 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:02:57.356420 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:02:57.356428 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:02:57.356435 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:02:57.356448 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running 
I0221 09:02:57.356454 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:02:57.356467 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:02:57.356486 223679 retry.go:31] will retry after 47.463338706s: missing components: kube-dns I0221 09:03:44.827562 223679 system_pods.go:86] 9 kube-system pods found I0221 09:03:44.827595 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:03:44.827608 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:03:44.827618 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:03:44.827630 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:03:44.827637 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:03:44.827644 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:03:44.827654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:03:44.827659 223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:03:44.827674 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:03:44.830160 223679 out.go:176] W0221 09:03:44.830324 223679 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns W0221 09:03:44.830341 223679 out.go:241] * * W0221 09:03:44.831471 223679 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ I0221 09:03:44.832903 223679 out.go:176] ** /stderr **
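The retry.go:31 records above show the wait loop backing off roughly geometrically with jitter (1.19s, 1.68s, 2.35s, 3.37s, ... up to ~50s) until the 5m0s apps_running budget is exhausted and the GUEST_START error fires. The following is a minimal Go sketch of that pattern, not minikube's actual retry package; the starting delay, growth factor, and cap are illustrative assumptions chosen to match the shape of the delays in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries check with jittered, roughly exponential backoff until it
// succeeds or the overall deadline (e.g. the 5m0s budget seen above) expires.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second // assumed starting delay
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Grow ~1.5x per attempt with jitter, capped at one minute; these
		// constants are assumptions that reproduce the log's delay sequence.
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		sleep := delay + jitter
		if sleep > time.Minute {
			sleep = time.Minute
		}
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	err := waitFor(func() error {
		return errors.New("missing components: kube-dns")
	}, 10*time.Second)
	fmt.Println(err)
}

Here the check never succeeds, so the sketch ends the same way the test did: the deadline expires while kube-dns is still reported missing.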
net_test.go:101: failed start: exit status 80 --- FAIL: TestNetworkPlugins/group/calico/Start (553.27s) === FAIL: . TestNetworkPlugins/group/calico (559.37s) net_test.go:198: "calico" test finished in 14m10.880041084s, failed=true net_test.go:199: *** TestNetworkPlugins/group/calico FAILED at 2022-02-21 09:03:44.881321403 +0000 UTC m=+2317.643640979 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/calico]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect calico-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect calico-20220221084934-6550: -- stdout -- [ { "Id": "7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c", "Created": "2022-02-21T08:54:39.336010404Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 224777, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:54:39.741937439Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/resolv.conf", "HostnamePath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/hostname", "HostsPath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/hosts", "LogPath": "/var/lib/docker/containers/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c/7ff1dcdb7d3889f4bd8ef1c5dbc904860568f88d539892eb1c405ab65e13be1c-json.log", "Name": "/calico-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "calico-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "calico-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs":
{ "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/
lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8/merged", "UpperDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8/diff", "WorkDir": "/var/lib/docker/overlay2/da0e34433eaaba2c59c7c66f013d3a1aa4769cf97fe1fa1986b5a6fbfa5f1ec8/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "calico-20220221084934-6550", "Source": "/var/lib/docker/volumes/calico-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "calico-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "calico-20220221084934-6550", "name.minikube.sigs.k8s.io": "calico-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "b3f6b92c299fab2b0618d523c664134a2b3ea294194e4ae464a452f87d8939d2", 
"HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49364" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49363" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49360" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49362" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49361" } ] }, "SandboxKey": "/var/run/docker/netns/b3f6b92c299f", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "calico-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.67.2" }, "Links": null, "Aliases": [ "7ff1dcdb7d38", "calico-20220221084934-6550" ], "NetworkID": "259ea390e5594c5573e56c602cbdaf2a91d5b217fce89343d624015685255bcb", "EndpointID": "8d63df32eefc92663e22f4efb2bd16fbb816ecbe394d0b6328ad38e288478661", "Gateway": "192.168.67.1", "IPAddress": "192.168.67.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:43:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p calico-20220221084934-6550 -n calico-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/calico FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/calico]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p calico-20220221084934-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p calico-20220221084934-6550 logs -n 25: (1.835390614s) helpers_test.go:253: TestNetworkPlugins/group/calico logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:25 UTC | Mon, 21 Feb 2022 08:53:26 UTC | | | --alsologtostderr -v=5 | | | | | | | unpause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:27 UTC | Mon, 21 Feb 2022 08:53:28 UTC | | | --alsologtostderr -v=5 | | | | | | | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | 
start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | 
Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:03:32 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:03:32.117451 442801 out.go:297] Setting OutFile to fd 1 ... I0221 09:03:32.117835 442801 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:32.117851 442801 out.go:310] Setting ErrFile to fd 2... I0221 09:03:32.117857 442801 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:32.118132 442801 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:03:32.118890 442801 out.go:304] Setting JSON to false I0221 09:03:32.120554 442801 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2766,"bootTime":1645431446,"procs":583,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:03:32.120646 442801 start.go:122] virtualization: kvm guest I0221 09:03:32.123238 442801 out.go:176] * [enable-default-cni-20220221084933-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:03:32.124663 442801 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:03:32.123381 442801 notify.go:193] Checking for updates... 
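Every record in these dumps follows the klog header format announced in the "Last Start" preamble above: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. When slicing failures out of a run this long it helps to parse that header mechanically; below is a small Go sketch that does so. The regular expression is an illustration written against the format string in the log, not anything shipped by minikube or klog.

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches headers like:
//   I0221 09:03:32.117451  442801 out.go:297] Setting OutFile to fd 1 ...
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	line := "W0221 09:03:32.310966 442801 out.go:241] ! Your cgroup does not allow setting memory."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}

Filtering on the severity capture (W and E records) is usually the fastest way to find the decisive lines, as with the cgroup warning parsed here.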
I0221 09:03:32.126005 442801 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:03:32.127444 442801 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:03:32.128833 442801 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:03:32.130126 442801 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:03:32.130603 442801 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:32.130689 442801 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:32.130768 442801 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:32.130810 442801 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:03:32.183866 442801 docker.go:132] docker version: linux-20.10.12 I0221 09:03:32.184022 442801 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:32.308357 442801 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:32.224294462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:03:32.308480 442801 docker.go:237] overlay module found I0221 09:03:32.310829 442801 out.go:176] * Using the docker driver based on user configuration I0221 09:03:32.310861 442801 start.go:281] selected driver: docker I0221 09:03:32.310868 442801 start.go:798] validating driver "docker" against I0221 09:03:32.310888 442801 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:03:32.310939 442801 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:03:32.310966 442801 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:03:32.312796 442801 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:03:32.313594 442801 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:32.439745 442801 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:32.355381059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:03:32.439886 442801 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:03:32.440079 442801 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m E0221 09:03:32.440098 442801 start_flags.go:440] Found deprecated --enable-default-cni flag, setting --cni=bridge I0221 09:03:32.440112 442801 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:03:32.440137 442801 cni.go:93] Creating CNI manager for "bridge" I0221 09:03:32.440148 442801 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni I0221 09:03:32.440157 442801 start_flags.go:302] config: {Name:enable-default-cni-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:enable-default-cni-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:03:32.442216 442801 out.go:176] * Starting control plane node enable-default-cni-20220221084933-6550 in cluster enable-default-cni-20220221084933-6550 I0221 09:03:32.442259 442801 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:03:32.443599 442801 out.go:176] * Pulling base image ... 
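The config: record above is Go's %+v rendering of the cluster profile that start_flags.go just generated, including the bridge CNI substituted for the deprecated --enable-default-cni flag and the VerifyComponents map that later drives the apps_running wait. A cut-down, illustrative Go declaration of that shape follows; the field names mirror the dump, but the real minikube type has many more fields and this is only a sketch.

package main

import "fmt"

// ClusterConfig is a trimmed sketch of the profile config printed above.
type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MiB
	CPUs             int
	CNI              string
	KubernetesConfig KubernetesConfig
	VerifyComponents map[string]bool
}

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

func main() {
	cfg := ClusterConfig{
		Name:   "enable-default-cni-20220221084933-6550",
		Driver: "docker",
		Memory: 2048,
		CPUs:   2,
		CNI:    "bridge", // substituted for the deprecated --enable-default-cni flag
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.23.4",
			ClusterName:       "enable-default-cni-20220221084933-6550",
			ContainerRuntime:  "docker",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
		VerifyComponents: map[string]bool{
			"apiserver": true, "apps_running": true, "default_sa": true,
			"extra": true, "kubelet": true, "node_ready": true, "system_pods": true,
		},
	}
	fmt.Printf("config: %+v\n", cfg)
}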
I0221 09:03:32.443647 442801 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:03:32.443685 442801 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:03:32.443699 442801 cache.go:57] Caching tarball of preloaded images I0221 09:03:32.443721 442801 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:03:32.444172 442801 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:03:32.444195 442801 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:03:32.444392 442801 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/config.json ... I0221 09:03:32.444424 442801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/config.json: {Name:mkf0bedf552068954fb3058e8f1835930a49f413 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:32.506427 442801 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:03:32.506466 442801 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:03:32.506482 442801 cache.go:208] Successfully downloaded all kic artifacts I0221 09:03:32.506546 442801 start.go:313] acquiring machines lock for enable-default-cni-20220221084933-6550: {Name:mkbc0432b219bda8857fd7f89775f7bbf9deb037 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:03:32.506717 442801 start.go:317] acquired machines lock for "enable-default-cni-20220221084933-6550" in 142.562µs I0221 09:03:32.506758 442801 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:enable-default-cni-20220221084933-6550 Namespace:default APIServerName:minikubeCA 
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:03:32.506882 442801 start.go:126] createHost starting for "" (driver="docker") I0221 09:03:31.857452 421870 node_ready.go:49] node "kindnet-20220221084934-6550" has status "Ready":"True" I0221 09:03:31.857489 421870 node_ready.go:38] duration metric: took 7.507952196s waiting for node "kindnet-20220221084934-6550" to be "Ready" ... I0221 09:03:31.857501 421870 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:03:31.869509 421870 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-svjnh" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.384289 421870 pod_ready.go:92] pod "coredns-64897985d-svjnh" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.384318 421870 pod_ready.go:81] duration metric: took 1.51477231s waiting for pod "coredns-64897985d-svjnh" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.384354 421870 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.388240 421870 pod_ready.go:92] pod "etcd-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.388260 421870 pod_ready.go:81] duration metric: took 3.893952ms waiting for pod "etcd-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.388270 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.391903 421870 pod_ready.go:92] pod "kube-apiserver-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.391931 421870 pod_ready.go:81] duration metric: took 3.653574ms waiting for pod "kube-apiserver-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.391943 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... 
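The "Saving config to .../config.json" and lock.go WriteFile lines above (500ms retry delay, 1m timeout) show the profile save being serialized behind a named lock. A sketch of the write half, assuming a temp-file-plus-rename scheme so readers never observe a half-written config.json; the named-lock guard itself is omitted here:

    package main

    import (
        "encoding/json"
        "log"
        "os"
        "path/filepath"
    )

    // saveConfig marshals the cluster config and writes it atomically:
    // write to a temp file in the same directory, then rename over the
    // target (rename is atomic on POSIX filesystems).
    func saveConfig(path string, cfg any) error {
        data, err := json.MarshalIndent(cfg, "", "    ")
        if err != nil {
            return err
        }
        tmp, err := os.CreateTemp(filepath.Dir(path), ".config-*")
        if err != nil {
            return err
        }
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), path)
    }

    func main() {
        cfg := map[string]any{"Name": "enable-default-cni-20220221084933-6550", "CPUs": 2, "Memory": 2048}
        if err := saveConfig("config.json", cfg); err != nil {
            log.Fatal(err)
        }
    }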
I0221 09:03:33.396201 421870 pod_ready.go:92] pod "kube-controller-manager-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.396225 421870 pod_ready.go:81] duration metric: took 4.273596ms waiting for pod "kube-controller-manager-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.396238 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-hvpn5" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.457452 421870 pod_ready.go:92] pod "kube-proxy-hvpn5" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.457474 421870 pod_ready.go:81] duration metric: took 61.229097ms waiting for pod "kube-proxy-hvpn5" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.457482 421870 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.858614 421870 pod_ready.go:92] pod "kube-scheduler-kindnet-20220221084934-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:03:33.858644 421870 pod_ready.go:81] duration metric: took 401.155454ms waiting for pod "kube-scheduler-kindnet-20220221084934-6550" in "kube-system" namespace to be "Ready" ... I0221 09:03:33.858660 421870 pod_ready.go:38] duration metric: took 2.001143433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:03:33.858686 421870 api_server.go:51] waiting for apiserver process to appear ... I0221 09:03:33.858736 421870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:03:33.886848 421870 api_server.go:71] duration metric: took 9.626804383s to wait for apiserver process to appear ... I0221 09:03:33.886874 421870 api_server.go:87] waiting for apiserver healthz status ... I0221 09:03:33.886883 421870 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:03:33.892372 421870 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 09:03:33.893460 421870 api_server.go:140] control plane version: v1.23.4 I0221 09:03:33.893483 421870 api_server.go:130] duration metric: took 6.603399ms to wait for apiserver health ... I0221 09:03:33.893493 421870 system_pods.go:43] waiting for kube-system pods to appear ... 
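The pod_ready.go entries above poll each system-critical pod until its Ready condition turns True, recording a duration metric per pod. A client-go sketch of that wait loop, assuming a standard kubeconfig and a hypothetical pod name (not minikube's kverify code):

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the
    // timeout expires, mirroring the pod_ready.go loop in the log.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitPodReady(cs, "kube-system", "etcd-minikube", 5*time.Minute); err != nil {
            log.Fatal(err)
        }
        log.Println("pod is Ready")
    }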
I0221 09:03:34.060826 421870 system_pods.go:59] 8 kube-system pods found I0221 09:03:34.060885 421870 system_pods.go:61] "coredns-64897985d-svjnh" [cd666a7b-1888-4f96-8615-0a625ca7c35a] Running I0221 09:03:34.060894 421870 system_pods.go:61] "etcd-kindnet-20220221084934-6550" [0a0638f5-5420-442a-bb3e-e9b3d10b1ca9] Running I0221 09:03:34.060900 421870 system_pods.go:61] "kindnet-b7vpv" [70703c09-41bc-4c02-9ccf-df45333fbc70] Running I0221 09:03:34.060906 421870 system_pods.go:61] "kube-apiserver-kindnet-20220221084934-6550" [6423a441-9bd2-4e30-a8c1-cd811fe6d38d] Running I0221 09:03:34.060912 421870 system_pods.go:61] "kube-controller-manager-kindnet-20220221084934-6550" [531d4d33-73de-4bcb-a2a5-9c884784ee41] Running I0221 09:03:34.060919 421870 system_pods.go:61] "kube-proxy-hvpn5" [eac36e6a-fd59-49e4-a536-c2aa610984ef] Running I0221 09:03:34.060938 421870 system_pods.go:61] "kube-scheduler-kindnet-20220221084934-6550" [d6e5d38f-b3a5-4b88-baf3-99269615bd6b] Running I0221 09:03:34.060944 421870 system_pods.go:61] "storage-provisioner" [84ae4f8f-baa9-4b02-a1f6-5d9026e71769] Running I0221 09:03:34.060950 421870 system_pods.go:74] duration metric: took 167.447613ms to wait for pod list to return data ... I0221 09:03:34.060958 421870 default_sa.go:34] waiting for default service account to be created ... I0221 09:03:34.323808 421870 default_sa.go:45] found service account: "default" I0221 09:03:34.323844 421870 default_sa.go:55] duration metric: took 262.878661ms for default service account to be created ... I0221 09:03:34.323854 421870 system_pods.go:116] waiting for k8s-apps to be running ... I0221 09:03:34.547698 421870 system_pods.go:86] 8 kube-system pods found I0221 09:03:34.547726 421870 system_pods.go:89] "coredns-64897985d-svjnh" [cd666a7b-1888-4f96-8615-0a625ca7c35a] Running I0221 09:03:34.547732 421870 system_pods.go:89] "etcd-kindnet-20220221084934-6550" [0a0638f5-5420-442a-bb3e-e9b3d10b1ca9] Running I0221 09:03:34.547736 421870 system_pods.go:89] "kindnet-b7vpv" [70703c09-41bc-4c02-9ccf-df45333fbc70] Running I0221 09:03:34.547743 421870 system_pods.go:89] "kube-apiserver-kindnet-20220221084934-6550" [6423a441-9bd2-4e30-a8c1-cd811fe6d38d] Running I0221 09:03:34.547751 421870 system_pods.go:89] "kube-controller-manager-kindnet-20220221084934-6550" [531d4d33-73de-4bcb-a2a5-9c884784ee41] Running I0221 09:03:34.547757 421870 system_pods.go:89] "kube-proxy-hvpn5" [eac36e6a-fd59-49e4-a536-c2aa610984ef] Running I0221 09:03:34.547763 421870 system_pods.go:89] "kube-scheduler-kindnet-20220221084934-6550" [d6e5d38f-b3a5-4b88-baf3-99269615bd6b] Running I0221 09:03:34.547774 421870 system_pods.go:89] "storage-provisioner" [84ae4f8f-baa9-4b02-a1f6-5d9026e71769] Running I0221 09:03:34.547786 421870 system_pods.go:126] duration metric: took 223.925846ms to wait for k8s-apps to be running ... I0221 09:03:34.547799 421870 system_svc.go:44] waiting for kubelet service to be running .... I0221 09:03:34.547838 421870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:03:34.559249 421870 system_svc.go:56] duration metric: took 11.445985ms WaitForService to wait for kubelet. I0221 09:03:34.559274 421870 kubeadm.go:548] duration metric: took 10.299235376s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 09:03:34.559291 421870 node_conditions.go:102] verifying NodePressure condition ... 
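The system_svc.go check above reduces to an exit code: systemctl is-active --quiet prints nothing and answers purely through its return status. A local sketch of the same test (the log runs it over SSH inside the node container, and includes an extra "service" token that is minikube's phrasing and omitted here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // serviceActive returns true when the unit is active: the --quiet form
    // of `systemctl is-active` exits 0 for active and nonzero otherwise.
    func serviceActive(name string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", name).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", serviceActive("kubelet"))
    }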
I0221 09:03:34.978164 421870 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:03:34.978194 421870 node_conditions.go:123] node cpu capacity is 8 I0221 09:03:34.978207 421870 node_conditions.go:105] duration metric: took 418.912308ms to run NodePressure ... I0221 09:03:34.978216 421870 start.go:213] waiting for startup goroutines ... I0221 09:03:35.015542 421870 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0) I0221 09:03:35.019140 421870 out.go:176] * Done! kubectl is now configured to use "kindnet-20220221084934-6550" cluster and "default" namespace by default I0221 09:03:32.509100 442801 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:03:32.509428 442801 start.go:160] libmachine.API.Create for "enable-default-cni-20220221084933-6550" (driver="docker") I0221 09:03:32.509468 442801 client.go:168] LocalClient.Create starting I0221 09:03:32.509561 442801 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:03:32.509601 442801 main.go:130] libmachine: Decoding PEM data... I0221 09:03:32.509626 442801 main.go:130] libmachine: Parsing certificate... I0221 09:03:32.509694 442801 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:03:32.509715 442801 main.go:130] libmachine: Decoding PEM data... I0221 09:03:32.509741 442801 main.go:130] libmachine: Parsing certificate... I0221 09:03:32.510145 442801 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:03:32.553881 442801 cli_runner.go:180] docker network inspect enable-default-cni-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:03:32.553962 442801 network_create.go:254] running [docker network inspect enable-default-cni-20220221084933-6550] to gather additional debugging logs... 
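The libmachine steps above (Reading certificate data, Decoding PEM data, Parsing certificate) are the standard PEM-then-x509 sequence from the Go standard library. A sketch with an illustrative certificate path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path is illustrative; the log reads certs under .minikube/certs.
        data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/certs/ca.pem"))
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data) // first PEM block; remainder ignored here
        if block == nil {
            log.Fatal("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
    }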
I0221 09:03:32.553988 442801 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220221084933-6550 W0221 09:03:32.600997 442801 cli_runner.go:180] docker network inspect enable-default-cni-20220221084933-6550 returned with exit code 1 I0221 09:03:32.601052 442801 network_create.go:257] error running [docker network inspect enable-default-cni-20220221084933-6550]: docker network inspect enable-default-cni-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: enable-default-cni-20220221084933-6550 I0221 09:03:32.601067 442801 network_create.go:259] output of [docker network inspect enable-default-cni-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: enable-default-cni-20220221084933-6550 ** /stderr ** I0221 09:03:32.601145 442801 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:03:32.649708 442801 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5d96ab4d6b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:0b:cb:43}} I0221 09:03:32.651477 442801 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00060ec80] misses:0} I0221 09:03:32.651529 442801 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:03:32.651564 442801 network_create.go:106] attempt to create docker network enable-default-cni-20220221084933-6550 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ... 
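The network.go lines above skip 192.168.49.0/24 because an existing bridge (br-5d96ab4d6b1a) holds it, then reserve 192.168.58.0/24. A much-simplified sketch of that scan; the stride of 9 is inferred from the 49 -> 58 -> 67 progression across these tests and is an assumption, and the real code also checks host interfaces and holds a time-boxed reservation:

    package main

    import "fmt"

    // firstFreeSubnet steps through candidate 192.168.x.0/24 blocks and
    // returns the first CIDR not present in the taken set.
    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third <= 254; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        inUse := map[string]bool{"192.168.49.0/24": true} // the existing minikube bridge
        fmt.Println(firstFreeSubnet(inUse))               // -> 192.168.58.0/24
    }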
I0221 09:03:32.651625 442801 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220221084933-6550 I0221 09:03:32.737383 442801 network_create.go:90] docker network enable-default-cni-20220221084933-6550 192.168.58.0/24 created I0221 09:03:32.737424 442801 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20220221084933-6550" container I0221 09:03:32.737487 442801 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:03:32.783684 442801 cli_runner.go:133] Run: docker volume create enable-default-cni-20220221084933-6550 --label name.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:03:32.821012 442801 oci.go:102] Successfully created a docker volume enable-default-cni-20220221084933-6550 I0221 09:03:32.821103 442801 cli_runner.go:133] Run: docker run --rm --name enable-default-cni-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --entrypoint /usr/bin/test -v enable-default-cni-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:03:33.446837 442801 oci.go:106] Successfully prepared a docker volume enable-default-cni-20220221084933-6550 I0221 09:03:33.446882 442801 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:03:33.446898 442801 kic.go:179] Starting extracting preloaded images to volume ... I0221 09:03:33.446952 442801 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:03:39.157671 442801 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.710682586s) I0221 09:03:39.157708 442801 kic.go:188] duration metric: took 5.710806 seconds to extract preloaded images to volume W0221 09:03:39.157755 442801 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:03:39.157770 442801 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
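The preload trick above is worth calling out: a throwaway container mounts the host tarball read-only next to the named volume and untars straight into it, so the node container later starts with the images already extracted. An os/exec sketch of that invocation, with a hypothetical host path and the digest dropped from the image reference:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the `docker run --rm --entrypoint /usr/bin/tar ...` line
        // in the log: tarball in read-only, volume mounted at /extractDir.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // hypothetical path
            "-v", "myvolume:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }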
I0221 09:03:39.157823 442801 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:03:39.287910 442801 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220221084933-6550 --name enable-default-cni-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220221084933-6550 --network enable-default-cni-20220221084933-6550 --ip 192.168.58.2 --volume enable-default-cni-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:03:39.785302 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Running}} I0221 09:03:39.825814 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:03:39.866107 442801 cli_runner.go:133] Run: docker exec enable-default-cni-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:03:39.933887 442801 oci.go:281] the created container "enable-default-cni-20220221084933-6550" has a running status. I0221 09:03:39.933924 442801 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa... I0221 09:03:40.203939 442801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:03:40.305594 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:03:40.345149 442801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:03:40.345176 442801 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:03:40.447477 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:03:40.486059 442801 machine.go:88] provisioning docker machine ... 
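The kic.go step above generates a fresh SSH keypair per machine and copies the public half (381 bytes in this run) into the container's authorized_keys. A sketch of that generation with crypto/rsa and golang.org/x/crypto/ssh; a 2048-bit RSA key yields an authorized_keys line of roughly that size. Filenames are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // Private key as PKCS#1 PEM, mode 0600 like a normal id_rsa.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            log.Fatal(err)
        }
        // Public key in authorized_keys format ("ssh-rsa AAAA...").
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            log.Fatal(err)
        }
    }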
I0221 09:03:40.486095 442801 ubuntu.go:169] provisioning hostname "enable-default-cni-20220221084933-6550" I0221 09:03:40.486163 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:40.528341 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:40.528564 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:40.528587 442801 main.go:130] libmachine: About to run SSH command: sudo hostname enable-default-cni-20220221084933-6550 && echo "enable-default-cni-20220221084933-6550" | sudo tee /etc/hostname I0221 09:03:40.672510 442801 main.go:130] libmachine: SSH cmd err, output: : enable-default-cni-20220221084933-6550 I0221 09:03:40.672575 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:40.714009 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:40.714173 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:40.714203 442801 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\senable-default-cni-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 enable-default-cni-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:03:40.839246 442801 main.go:130] libmachine: SSH cmd err, output: : I0221 09:03:40.839282 442801 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:03:40.839317 442801 ubuntu.go:177] setting up certificates I0221 09:03:40.839328 442801 provision.go:83] configureAuth start I0221 09:03:40.839376 442801 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220221084933-6550 I0221 09:03:40.878295 442801 provision.go:138] copyHostCerts I0221 09:03:40.878356 442801 exec_runner.go:144] found 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:03:40.878363 442801 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:03:40.878423 442801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:03:40.878507 442801 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:03:40.878522 442801 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:03:40.878544 442801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:03:40.878603 442801 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:03:40.878613 442801 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:03:40.878632 442801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:03:40.878693 442801 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20220221084933-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20220221084933-6550] I0221 09:03:41.118770 442801 provision.go:172] copyRemoteCerts I0221 09:03:41.118849 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:03:41.118898 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.165106 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:41.259401 442801 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:03:41.283168 442801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:03:41.352800 442801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes) I0221 09:03:41.372972 442801 provision.go:86] duration metric: configureAuth took 533.625844ms I0221 09:03:41.373005 442801 ubuntu.go:193] setting minikube options for container-runtime I0221 09:03:41.373216 442801 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:41.373276 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.411159 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:41.411354 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:41.411372 442801 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:03:41.539301 442801 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:03:41.539328 442801 ubuntu.go:71] root file system type: overlay I0221 09:03:41.539501 442801 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:03:41.539561 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.577090 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:41.577270 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:41.577373 442801 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:03:41.719982 442801 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:03:41.720076 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:41.761362 442801 main.go:130] libmachine: Using SSH client type: native I0221 09:03:41.761534 442801 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49389 } I0221 09:03:41.761562 442801 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:03:42.457894 442801 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:03:41.712293296 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:03:42.457928 442801 machine.go:91] provisioned docker machine in 1.971846976s I0221 09:03:42.457939 442801 client.go:171] LocalClient.Create took 9.948461628s I0221 09:03:42.457949 442801 start.go:168] duration metric: libmachine.API.Create for "enable-default-cni-20220221084933-6550" took 9.948522593s I0221 09:03:42.457958 442801 start.go:267] post-start starting for "enable-default-cni-20220221084933-6550" (driver="docker") I0221 09:03:42.457964 442801 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:03:42.458031 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:03:42.458081 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.500407 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:42.591041 442801 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:03:42.593837 442801 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:03:42.593864 442801 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:03:42.593877 442801 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:03:42.593884 442801 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:03:42.593900 442801 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
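Every "About to run SSH command" entry above is one short-lived session against the container's SSH port published on localhost (49389 in this run). A golang.org/x/crypto/ssh sketch of that pattern, reusing the swap-only-if-changed idiom the log applies to docker.service; InsecureIgnoreHostKey is tolerable only because the target is a throwaway local container, never a real host:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("id_rsa") // the generated machine key
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49389", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Replace the unit and bounce docker only when the new file differs,
        // the same idiom the log runs for docker.service.
        out, err := sess.CombinedOutput(
            `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
                `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
                `sudo systemctl -f daemon-reload && sudo systemctl -f restart docker; }`)
        fmt.Printf("%s err=%v\n", out, err)
    }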
I0221 09:03:42.593960 442801 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:03:42.594044 442801 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:03:42.594142 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:03:42.600714 442801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:03:42.628017 442801 start.go:270] post-start completed in 170.038678ms I0221 09:03:42.628418 442801 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220221084933-6550 I0221 09:03:42.675220 442801 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/config.json ... I0221 09:03:42.675482 442801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:03:42.675527 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.716687 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:42.808692 442801 start.go:129] duration metric: createHost completed in 10.301791872s I0221 09:03:42.808724 442801 start.go:80] releasing machines lock for "enable-default-cni-20220221084933-6550", held for 10.301985759s I0221 09:03:42.808816 442801 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220221084933-6550 I0221 09:03:42.857057 442801 ssh_runner.go:195] Run: systemctl --version I0221 09:03:42.857100 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.857099 442801 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:03:42.857145 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:03:42.897241 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:42.900654 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:03:43.147753 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 
09:03:43.159383 442801 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:03:43.176145 442801 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:03:43.176217 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:03:43.186740 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:03:43.200685 442801 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:03:43.301648 442801 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:03:43.402146 442801 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:03:43.414491 442801 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:03:43.527390 442801 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:03:43.539354 442801 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:03:43.584645 442801 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:03:43.653516 442801 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:03:43.653610 442801 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:03:43.696151 442801 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0221 09:03:43.700610 442801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:03:44.827562 223679 system_pods.go:86] 9 kube-system pods found I0221 09:03:44.827595 223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers]) I0221 09:03:44.827608 223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node]) I0221 09:03:44.827618 223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:03:44.827630 223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running I0221 09:03:44.827637 223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running I0221 09:03:44.827644 223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running I0221 09:03:44.827654 223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running I0221 09:03:44.827659 223679 
system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running I0221 09:03:44.827674 223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:03:44.830160 223679 out.go:176] W0221 09:03:44.830324 223679 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns W0221 09:03:44.830341 223679 out.go:241] * W0221 09:03:44.831471 223679 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 08:54:40 UTC, end at Mon 2022-02-21 09:03:46 UTC. -- Feb 21 09:03:27 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:27.616092005Z" level=info msg="ignoring event" container=4f38b9bacf0339ab30f3436eb4e78170e83b897fd87d4bddd553ef838a6901a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:28 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:28.688701203Z" level=info msg="ignoring event" container=266cb1873eea9e5440e92a2b3d8794297cc19242738d3fabd7ee7b539ff28661 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:28 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:28.760163062Z" level=info msg="ignoring event" container=d91975382968c4f8c92ab1dab5eb8c09e5acbe64aec249eb478bb9f9eca510d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:29 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:29.770733170Z" level=info msg="ignoring event" container=f870ea5d901515d7e9c45252deba9281ecd6249fe43d402c850863b52502d649 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:29 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:29.770797352Z" level=info msg="ignoring event" container=32ec078ce49383855c916443bb37868add2bff7fb49f40c8b68dd3c61ce3c523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:30 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:30.722591239Z" level=info msg="ignoring event" container=c65e94d51288f1800288c9dc15b69a2625681562db53d70f88c668e7c6cd1ab4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:30 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:30.728235231Z" level=info msg="ignoring event" container=947ed2826f896f07877b3d66696c29359a8c8e4b491bb70f937521fa4b0470a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:31 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:31.833362930Z" level=info msg="ignoring event" container=10c07c311cd6727e3d87d29eafaf85b6ad8c002e94e6e24e19fc6e2229cce2d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:31 calico-20220221084934-6550 dockerd[456]: 
time="2022-02-21T09:03:31.859950749Z" level=info msg="ignoring event" container=56ad2ebf8044a4657ba0e43a1232e327ff77629650e7d34cf462eb7dfdda4115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:32 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:32.838635969Z" level=info msg="ignoring event" container=4d196a793a0e212fb92fed4479917ec76c27131fa9df2039a63f3dc1531b8e4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:32 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:32.856575938Z" level=info msg="ignoring event" container=5be8782476baf29d3c0883b4c4fd66ccd7a5744fea303ba80a6d161d1a4b0a8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:33 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:33.957320035Z" level=info msg="ignoring event" container=af1a2681d7777d89e805f17b60b2a5fa92731bbca52b61d26c45ae5f7883037d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:33 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:33.960477494Z" level=info msg="ignoring event" container=1c68c7348674e640b144fbe091fed64f6f50834f005826f61db52b8a242b6140 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:39 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:39.715963092Z" level=info msg="ignoring event" container=47c9ca5c7ca2166ecd6637c39266e1f1107cc861529675c315c9f52f427ad2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:39 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:39.726593389Z" level=info msg="ignoring event" container=55974a0b5221147225025d52816b8fc4db03bc2ed52233c8229dce191b6b34c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:41 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:41.172528995Z" level=info msg="ignoring event" container=548e41e757130a87f329572331aad3a75812426cc5e189abb4deb9252cbe494b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:41 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:41.230879350Z" level=info msg="ignoring event" container=8cb09ad5b28be399b234008b66bfa44f3fcbfb05983336097526e19abe8fa42d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:42 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:42.253693773Z" level=info msg="ignoring event" container=ac0166177349ff0cee57998c7d4a25e26b658410d3dfdfc78412638245b9b726 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:42 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:42.255202569Z" level=info msg="ignoring event" container=8c15340c79ad79476c9efeb87d8be294d9f1d33c6175fbe853c798b97608f995 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:43 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:43.443283756Z" level=info msg="ignoring event" container=4cd175e1b597000f773c29015e31a6bc8122156dc74fdb3c51789eedf01bc86e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:43 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:43.460207969Z" level=info msg="ignoring event" container=933b20157b6cdd68fc037a26bbbf3d2b151e1dd24fe666b8f25de2a411ba828b 
Feb 21 09:03:44 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:44.441722301Z" level=info msg="ignoring event" container=fa0aeb35b4bd828626ec602baed3d034a8cff63c353b49c2b12e8c653868cd90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:03:44 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:44.462639442Z" level=info msg="ignoring event" container=e9b9d78c45ad7e593cce6a33a8e269837889b17c4383c750379b5d03698389ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:03:45 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:45.663133371Z" level=info msg="ignoring event" container=c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 21 09:03:45 calico-20220221084934-6550 dockerd[456]: time="2022-02-21T09:03:45.717696781Z" level=info msg="ignoring event" container=50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE                                                                                                              CREATED         STATE     NAME                      ATTEMPT   POD ID
dbdc8cd6a1ce1   5ef66b403f4f0                                                                                                      2 minutes ago   Exited    calico-node               5         c0d320ac30a9b
6d88e003ae4d3   6e38f40d628db                                                                                                      3 minutes ago   Exited    storage-provisioner       5         5d2b5639f06c5
1bc1aa4df9f17   calico/pod2daemon-flexvol@sha256:c17e3e9871682bed00bfd33f8d6f00db1d1a126034a25bf5380355978e0c548d                   8 minutes ago   Exited    flexvol-driver            0         c0d320ac30a9b
01afaa16a59b8   4945b742b8e66                                                                                                      8 minutes ago   Exited    install-cni               0         c0d320ac30a9b
3d508836fbe39   calico/cni@sha256:9906e2cca8006e1fe9fc3f358a3a06da6253afdd6fad05d594e884e8298ffe1d                                  8 minutes ago   Exited    upgrade-ipam              0         c0d320ac30a9b
449cc37a92fe7   2114245ec4d6b                                                                                                      8 minutes ago   Running   kube-proxy                0         a0f0400a1b94e
f012d1d45e221   aceacb6244f9f                                                                                                      8 minutes ago   Running   kube-scheduler            0         cb8998c81feab
96cc9489b33e5   25f8c7f3da61c                                                                                                      8 minutes ago   Running   etcd                      0         566db401d5d43
cddc9ef001f2d   25444908517a5                                                                                                      8 minutes ago   Running   kube-controller-manager   0         aa8fb7fa6d1d3
5b808a7ef4a26   62930710c9634                                                                                                      8 minutes ago   Running   kube-apiserver            0         169b39b50a62e
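The container table above is the crux of this failure: calico-node has Exited on attempt 5 (it is crash-looping) while the control-plane containers keep running, so the CNI never becomes ready and the kube-dns component the test waits for is never scheduled. The test's system_pods view can be reproduced with a short client-go program; the following is a minimal sketch only, assuming the profile's kubeconfig is the active context (the kubeconfig path and the output format are illustrative, not minikube's actual system_pods helper):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load whatever kubeconfig the shell is using (assumption: it points
	// at the calico-20220221084934-6550 profile).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Print each pod's phase plus any waiting containers and their restart
	// counts; a crash loop like calico-node's shows up as a high restart
	// count with a Waiting reason such as CrashLoopBackOff.
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				fmt.Printf("  %s: waiting (%s), restarts=%d\n", st.Name, st.State.Waiting.Reason, st.RestartCount)
			}
		}
	}
}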
*
* ==> describe nodes <==
*
Name:               calico-20220221084934-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=calico-20220221084934-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=calico-20220221084934-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T08_55_04_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 08:55:01 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  calico-20220221084934-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:03:44 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 21 Feb 2022 09:01:12 +0000   Mon, 21 Feb 2022 08:55:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 21 Feb 2022 09:01:12 +0000   Mon, 21 Feb 2022 08:55:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 21 Feb 2022 09:01:12 +0000   Mon, 21 Feb 2022 08:55:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 21 Feb 2022 09:01:12 +0000   Mon, 21 Feb 2022 08:55:13 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.67.2
  Hostname:    calico-20220221084934-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                b97b2c97-fa91-4271-b3ba-befe7b7ea324
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                                 ------------  ----------  ---------------  -------------  ---
  kube-system  calico-kube-controllers-8594699699-ftdtm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
  kube-system  calico-node-zcdj6                                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m33s
  kube-system  coredns-64897985d-r75jc                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m32s
  kube-system  etcd-calico-20220221084934-6550                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m43s
  kube-system  kube-apiserver-calico-20220221084934-6550            250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m43s
  kube-system  kube-controller-manager-calico-20220221084934-6550   200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m43s
  kube-system  kube-proxy-kwcvx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
  kube-system  kube-scheduler-calico-20220221084934-6550            100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m43s
  kube-system  storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                1 (12%)     0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  Starting                 8m31s                  kube-proxy
  Normal  Starting                 8m58s                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  8m58s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  8m57s (x4 over 8m58s)  kubelet     Node calico-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m57s (x3 over 8m58s)  kubelet     Node calico-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m57s (x3 over 8m58s)  kubelet     Node calico-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  8m43s                  kubelet     Node calico-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m43s                  kubelet     Node calico-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m43s                  kubelet     Node calico-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m43s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 8m43s                  kubelet     Starting kubelet.
  Normal  NodeReady                8m33s                  kubelet     Node calico-20220221084934-6550 status is now: NodeReady
*
* ==> dmesg <==
*
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 80 7d 07 f0 ca 08 06
[  +2.561210] IPv4: martian source 10.85.0.159 from 10.85.0.159, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 23 e1 c4 83 2c 08 06
[  +2.615653] IPv4: martian source 10.85.0.160 from 10.85.0.160, on dev eth0
[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e 64 41 7f 5e 31 08 06
[  +2.733452] IPv4: martian source 10.85.0.161 from 10.85.0.161, on dev eth0
[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da fc d1 c9 f2 2a 08 06
[  +2.883194] IPv4: martian source 10.85.0.162 from 10.85.0.162, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 5e d5 29 ea a8 08 06
[  +2.455339] IPv4: martian source 10.85.0.163 from 10.85.0.163, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 50 c8 60 43 de 08 06
[  +2.674144] IPv4: martian source 10.85.0.164 from 10.85.0.164, on dev eth0
[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae b8 d8 5c 06 86 08 06
[  +2.173451] IPv4: martian source 10.85.0.165 from 10.85.0.165, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 23 71 a2 17 13 08 06
[  +3.191430] IPv4: martian source 10.85.0.166 from 10.85.0.166, on dev eth0
[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa ee 02 4a fe dc 08 06
[  +3.010319] IPv4: martian source 10.85.0.167 from 10.85.0.167, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 1f 49 7a 27 ae 08 06
[  +3.012859] IPv4: martian source 10.85.0.168 from 10.85.0.168, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 7f b6 f0 26 29 08 06
[  +4.014892] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth3bf823e9
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 3a 4d b5 7d b0 08 06
[  +8.773962] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth9d08a992
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae fa 77 2a a4 f4 08 06
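The martian-source burst in dmesg is a useful breadcrumb: 10.85.0.x is not this node's PodCIDR (10.244.0.0/24 per the describe output above), but it appears to be the range handed out by a default bridge CNI config in the node image, which suggests sandboxes were being plumbed by a fallback config while Calico never initialized. One way to see which config the runtime will honor is to list /etc/cni/net.d the way dockershim's CNI driver does, where the lexicographically first valid file wins; a minimal sketch, assuming it runs inside the node:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

func main() {
	// dockershim/libcni sort the CNI config files and use the first valid
	// one, so a stray earlier-sorting file can shadow Calico's config.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var names []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			names = append(names, e.Name())
		}
	}
	sort.Strings(names)
	for i, n := range names {
		mark := "  "
		if i == 0 {
			mark = "* " // the one the runtime is expected to pick
		}
		fmt.Println(mark + n)
	}
}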
peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"} {"level":"info","ts":"2022-02-21T08:54:55.336Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:calico-20220221084934-6550 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T08:54:56.321Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.323Z","caller":"etcdserver/server.go:2500","msg":"cluster version 
is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:54:56.326Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"} {"level":"warn","ts":"2022-02-21T08:55:39.983Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"273.653772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-02-21T08:55:39.983Z","caller":"traceutil/trace.go:171","msg":"trace[311069928] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:619; }","duration":"273.793799ms","start":"2022-02-21T08:55:39.709Z","end":"2022-02-21T08:55:39.983Z","steps":["trace[311069928] 'agreement among raft nodes before linearized reading' (duration: 87.453612ms)","trace[311069928] 'range keys from in-memory index tree' (duration: 186.156694ms)"],"step_count":2} * * ==> kernel <== * 09:03:47 up 46 min, 0 users, load average: 4.79, 4.56, 3.65 Linux calico-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [5b808a7ef4a2] <== * I0221 08:54:58.302165 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 08:54:58.302218 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 08:54:58.302232 1 cache.go:39] Caches are synced for autoregister controller I0221 08:54:58.302298 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:54:58.302435 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:54:59.057676 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:54:59.062371 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:54:59.065126 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:54:59.065494 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:54:59.065514 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 08:54:59.452494 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 08:54:59.482209 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 08:54:59.624710 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 08:54:59.629617 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2] I0221 08:54:59.630542 1 controller.go:611] quota admission added evaluator for: endpoints I0221 08:54:59.634181 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 08:55:00.240484 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 08:55:02.953918 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 08:55:02.962144 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 08:55:02.971590 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 08:55:03.150647 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 08:55:04.749784 1 controller.go:611] quota admission added evaluator for: poddisruptionbudgets.policy I0221 08:55:13.694331 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 08:55:13.794266 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 08:55:15.516406 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [cddc9ef001f2] <== * W0221 08:55:23.405231 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.405240 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.405420 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.405433 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.405444 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.405712 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.405735 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.405759 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" E0221 08:55:23.406053 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406068 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406085 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.406285 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406298 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406310 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.406555 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406568 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406581 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.406798 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.406810 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.406826 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" E0221 08:55:23.407275 1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input W0221 08:55:23.407296 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: "" E0221 08:55:23.407313 1 plugins.go:752] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" I0221 08:55:43.750560 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0221 08:55:44.551239 1 shared_informer.go:247] Caches are synced for garbage collector * * ==> kube-proxy [449cc37a92fe] <== * I0221 08:55:15.461389 1 node.go:163] Successfully retrieved node IP: 192.168.67.2 I0221 08:55:15.461460 1 server_others.go:138] "Detected node IP" address="192.168.67.2" I0221 08:55:15.461489 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 08:55:15.512835 1 server_others.go:206] "Using iptables Proxier" I0221 08:55:15.512880 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 08:55:15.512893 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 08:55:15.512916 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 08:55:15.513378 1 server.go:656] "Version info" version="v1.23.4" I0221 08:55:15.513997 1 config.go:317] "Starting service config controller" I0221 08:55:15.514024 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 08:55:15.514116 1 config.go:226] "Starting endpoint slice config controller" I0221 08:55:15.514123 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 08:55:15.614579 1 shared_informer.go:247] Caches are synced for endpoint slice config I0221 08:55:15.614643 1 shared_informer.go:247] Caches are synced for service config * * ==> kube-scheduler [f012d1d45e22] <== * W0221 08:54:58.217986 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:54:58.218793 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:54:58.217997 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 08:54:58.218812 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 08:54:58.218055 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 08:54:58.218846 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 08:54:58.218067 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User 
"system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 08:54:58.218875 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 08:54:58.218174 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:54:58.218888 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:54:58.218260 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 08:54:58.218900 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 08:54:58.218378 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0221 08:54:58.218922 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0221 08:54:58.218504 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 08:54:58.218933 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 08:54:58.218946 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 08:54:58.218978 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 08:54:59.168312 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 08:54:59.168351 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 08:54:59.207809 1 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:54:59.207897 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:54:59.246521 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 08:54:59.246561 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope I0221 08:54:59.714113 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 08:54:40 UTC, end at Mon 2022-02-21 09:03:47 UTC. -- Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.710223 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.710313 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\\\" network for pod \\\"coredns-64897985d-r75jc\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-r75jc_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-64897985d-r75jc" podUID=8b61f5f5-e695-42e1-8247-797a3d90eef7 Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735736 2000 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735804 
2000 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735828 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:45.735892 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\\\" network for pod \\\"calico-kube-controllers-8594699699-ftdtm\\\": networkPlugin cni failed to set up pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" podUID=198a6a8f-4d1b-44fc-9a43-3166e582db73 Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.850866 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-r75jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\"" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.873464 2000 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.875481 2000 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"c0c2b655a97ae019aa88cc9432fc59faccab48fe8673445b88480e27e6e1d1f9\"" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.880374 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container 
\"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\"" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.902960 2000 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473" Feb 21 09:03:45 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:45.904900 2000 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"50a4b9c1073c01deb2038992f9533c976151df62452a9f2a2982d6f1d3d51473\"" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.706342 2000 cni.go:362] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" podSandboxID={Type:docker ID:a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495} podNetnsPath="/proc/168070/ns/net" networkType="calico" networkName="k8s-pod-network" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.711457 2000 cni.go:362] "Error adding pod to network" err="stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" podSandboxID={Type:docker ID:64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e} podNetnsPath="/proc/168108/ns/net" networkType="calico" networkName="k8s-pod-network" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943006 2000 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943091 2000 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943122 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\" network for pod \"coredns-64897985d-r75jc\": networkPlugin cni failed to set up pod \"coredns-64897985d-r75jc_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-64897985d-r75jc" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.943178 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-r75jc_kube-system(8b61f5f5-e695-42e1-8247-797a3d90eef7)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\\\" network for pod \\\"coredns-64897985d-r75jc\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-r75jc_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-64897985d-r75jc" podUID=8b61f5f5-e695-42e1-8247-797a3d90eef7 Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:46.944023 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-r75jc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a3fbe31f1502211400f87005d0a496236112c0851b47684c0e566d01f4018495\"" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.944936 2000 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.944989 2000 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.945017 2000 kuberuntime_manager.go:832] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\" network for pod \"calico-kube-controllers-8594699699-ftdtm\": networkPlugin cni failed to set up pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" Feb 21 09:03:46 calico-20220221084934-6550 kubelet[2000]: E0221 09:03:46.945101 2000 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system(198a6a8f-4d1b-44fc-9a43-3166e582db73)\\\": rpc error: code = Unknown desc = failed to set up 
sandbox container \\\"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\\\" network for pod \\\"calico-kube-controllers-8594699699-ftdtm\\\": networkPlugin cni failed to set up pod \\\"calico-kube-controllers-8594699699-ftdtm_kube-system\\\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/calico-kube-controllers-8594699699-ftdtm" podUID=198a6a8f-4d1b-44fc-9a43-3166e582db73 Feb 21 09:03:47 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:47.020379 2000 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"calico-kube-controllers-8594699699-ftdtm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\"" Feb 21 09:03:47 calico-20220221084934-6550 kubelet[2000]: I0221 09:03:47.045975 2000 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"64b1f2be9066823ac47543a08725a98359fcb14d7dd9d60c1f0e534433c5021e\"" * * ==> storage-provisioner [6d88e003ae4d] <== * I0221 09:00:41.449434 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0221 09:01:11.451563 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p calico-20220221084934-6550 -n calico-20220221084934-6550 helpers_test.go:262: (dbg) Run: kubectl --context calico-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/calico]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context calico-20220221084934-6550 describe pod calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc helpers_test.go:276: (dbg) Non-zero exit: kubectl --context calico-20220221084934-6550 describe pod calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc: exit status 1 (68.83274ms) ** stderr ** Error from server (NotFound): pods "calico-kube-controllers-8594699699-ftdtm" not found Error from server (NotFound): pods "coredns-64897985d-r75jc" not found ** /stderr ** helpers_test.go:278: kubectl --context calico-20220221084934-6550 describe pod calico-kube-controllers-8594699699-ftdtm coredns-64897985d-r75jc: exit status 1 helpers_test.go:176: Cleaning up "calico-20220221084934-6550" profile ... helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p calico-20220221084934-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p calico-20220221084934-6550: (2.893399255s) --- FAIL: TestNetworkPlugins/group/calico (559.37s) === FAIL: . 
=== FAIL: . TestNetworkPlugins/group/auto/DNS (322.31s)
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157342824s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159318154s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
E0221 09:02:30.569358 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126372066s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:02:54.642100 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150214828s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172290442s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151381391s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:04:05.983652 6550 cert_rotation.go:168] key failed with : open
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156779625s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13259579s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:04:33.148416 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126335273s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:05:10.800096 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133340056s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133325385s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:06:16.369831 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.375077 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.385327 6550 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.405618 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.445952 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.526233 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:16.686635 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:17.007118 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:17.648208 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:18.928460 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144808472s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* --- FAIL: TestNetworkPlugins/group/auto/DNS (322.31s) === FAIL: . 
TestNetworkPlugins/group/auto (836.10s) net_test.go:198: "auto" test finished in 17m46.663428147s, failed=true net_test.go:199: *** TestNetworkPlugins/group/auto FAILED at 2022-02-21 09:07:20.424948873 +0000 UTC m=+2533.187268467 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/auto]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect auto-20220221084933-6550 helpers_test.go:236: (dbg) docker inspect auto-20220221084933-6550: -- stdout -- [ { "Id": "14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d", "Created": "2022-02-21T08:56:59.944923949Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 275744, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T08:57:00.400031147Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/resolv.conf", "HostnamePath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/hostname", "HostsPath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/hosts", "LogPath": "/var/lib/docker/containers/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d/14e23cc18317663f9fe57cb405230b41185fa9f7f233ca15f6ec32d64c887e9d-json.log", "Name": "/auto-20220221084933-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "auto-20220221084933-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "auto-20220221084933-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, 
"MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bf
e328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88/merged", "UpperDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88/diff", "WorkDir": "/var/lib/docker/overlay2/5ab0074a8ea0796f69fee69831f0118dd2c6851670ee5682833df7e80c58ce88/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "auto-20220221084933-6550", "Source": "/var/lib/docker/volumes/auto-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "auto-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "auto-20220221084933-6550", "name.minikube.sigs.k8s.io": "auto-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "bc21dc1487002ea911d18ddad607e56bce375fd30c415325f2c8ad8a51175f58", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49379" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49378" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49375" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49377" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49376" } ] }, "SandboxKey": "/var/run/docker/netns/bc21dc148700", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { 
"auto-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.76.2" }, "Links": null, "Aliases": [ "14e23cc18317", "auto-20220221084933-6550" ], "NetworkID": "b94a766473076f24d64d27d7767effe55cd2409ed2b6d5964dc439f32cedab19", "EndpointID": "18759bcb1f5666f27fc37ee6b69b003f2272f85d2456e612f032471e98395eee", "Gateway": "192.168.76.1", "IPAddress": "192.168.76.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:4c:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p auto-20220221084933-6550 -n auto-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/auto FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/auto]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p auto-20220221084933-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p auto-20220221084933-6550 logs -n 25: (1.215894629s) helpers_test.go:253: TestNetworkPlugins/group/auto logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | pause | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:28 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | --alsologtostderr -v=5 | | | | | | | delete | -p | kubernetes-upgrade-20220221085141-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:22 UTC | Mon, 21 Feb 2022 08:53:29 UTC | | | kubernetes-upgrade-20220221085141-6550 | | | | | | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | | --alsologtostderr -v=5 | | | | | | | profile | list --output json | minikube | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:32 UTC | Mon, 21 Feb 2022 08:53:32 UTC | | delete | -p pause-20220221085158-6550 | pause-20220221085158-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:53:33 UTC | | start | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:00 UTC | Mon, 21 Feb 2022 08:54:26 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | -v=1 --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 
| Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | 
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:03:51 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:03:51.048978 450843 out.go:297] Setting OutFile to fd 1 ... I0221 09:03:51.049079 450843 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:51.049091 450843 out.go:310] Setting ErrFile to fd 2... I0221 09:03:51.049098 450843 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:03:51.049264 450843 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:03:51.049642 450843 out.go:304] Setting JSON to false I0221 09:03:51.072350 450843 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2785,"bootTime":1645431446,"procs":576,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:03:51.072451 450843 start.go:122] virtualization: kvm guest I0221 09:03:51.075112 450843 out.go:176] * [bridge-20220221084933-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:03:51.076523 450843 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:03:51.075281 450843 notify.go:193] Checking for updates... I0221 09:03:51.077790 450843 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:03:51.079195 450843 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:03:51.080510 450843 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:03:51.081799 450843 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:03:51.082286 450843 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:51.082382 450843 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:51.082456 450843 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:03:51.082505 450843 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:03:51.135679 450843 docker.go:132] docker version: linux-20.10.12 I0221 09:03:51.135786 450843 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:51.248907 450843 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true 
CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:51.169795922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:03:51.249068 450843 docker.go:237] overlay module found I0221 09:03:51.252015 450843 out.go:176] * Using the docker driver based on user configuration I0221 09:03:51.252048 450843 start.go:281] selected driver: docker I0221 09:03:51.252053 450843 start.go:798] validating driver "docker" against I0221 09:03:51.252073 450843 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:03:51.252125 450843 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:03:51.252146 450843 out.go:241] ! Your cgroup does not allow setting memory. 
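The two W-level lines just above mean the CI host does not expose a cgroup memory controller to Docker, so the requested --memory=2048 cannot actually be enforced for the kic container. A minimal sketch of how such a host can be detected (illustrative only, not minikube's actual oci check; the paths assume the standard /sys/fs/cgroup layout):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // memoryCgroupAvailable reports whether the host exposes a cgroup memory
    // controller, checking the v1 per-controller hierarchy first and then the
    // v2 unified hierarchy.
    func memoryCgroupAvailable() bool {
    	if _, err := os.Stat("/sys/fs/cgroup/memory/memory.limit_in_bytes"); err == nil {
    		return true // cgroup v1: memory hierarchy is mounted
    	}
    	if b, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return strings.Contains(string(b), "memory") // cgroup v2: unified hierarchy
    	}
    	return false
    }

    func main() {
    	fmt.Println("memory cgroup available:", memoryCgroupAvailable())
    }

When the controller is absent, the start proceeds anyway and simply omits the memory limit, which is why the later `docker run` for this cluster carries --cpus=2 but no --memory flag.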
I0221 09:03:51.253396 450843 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:03:51.253973 450843 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:03:51.362904 450843 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:03:51.284159132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 09:03:51.363053 450843 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:03:51.363204 450843 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:03:51.363228 450843 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:03:51.363244 450843 cni.go:93] Creating CNI manager for "bridge" I0221 09:03:51.363252 450843 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni I0221 09:03:51.363269 450843 start_flags.go:302] config: {Name:bridge-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:03:51.366013 450843 out.go:176] * Starting control plane node bridge-20220221084933-6550 in cluster bridge-20220221084933-6550 I0221 09:03:51.366043 450843 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:03:51.367305 450843 out.go:176] * Pulling base image ... 
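The preload check that follows resolves an expected tarball name from the Kubernetes version, container runtime, storage driver, and architecture, and only downloads it when no cached copy exists. A sketch of that path construction, with the naming pieces read off the filename in the log itself (the "v17" schema tag and field layout are inferred from this log, not taken from minikube's source):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"runtime"
    )

    // preloadPath rebuilds the cache path probed in the log:
    // <home>/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
    // It assumes minikubeHome is the .minikube directory itself (simplified).
    func preloadPath(minikubeHome, k8sVersion, containerRuntime string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v17-%s-%s-overlay2-%s.tar.lz4",
    		k8sVersion, containerRuntime, runtime.GOARCH)
    	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
    	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.4", "docker")
    	if _, err := os.Stat(p); err == nil {
    		fmt.Println("found local preload, skipping download:", p)
    	} else {
    		fmt.Println("no cached preload at", p)
    	}
    }

A cache hit is what lets this start log "Found local preload ... skipping download" and later extract the tarball into the cluster's volume instead of pulling every image.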
I0221 09:03:51.367334 450843 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:03:51.367368 450843 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:03:51.367383 450843 cache.go:57] Caching tarball of preloaded images I0221 09:03:51.367436 450843 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:03:51.367599 450843 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:03:51.367626 450843 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:03:51.367731 450843 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/config.json ... I0221 09:03:51.367754 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/config.json: {Name:mk9f30a296298673b7d3985a1a22baf15a0d8519 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:03:51.418870 450843 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:03:51.418908 450843 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:03:51.418928 450843 cache.go:208] Successfully downloaded all kic artifacts I0221 09:03:51.418966 450843 start.go:313] acquiring machines lock for bridge-20220221084933-6550: {Name:mk5df6888113cf2604548c3a60d88507d1709053 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:03:51.419213 450843 start.go:317] acquired machines lock for "bridge-20220221084933-6550" in 225.518µs I0221 09:03:51.419251 450843 start.go:89] Provisioning new machine with config: &{Name:bridge-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker 
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:03:51.419356 450843 start.go:126] createHost starting for "" (driver="docker") I0221 09:03:49.110101 442801 out.go:203] - Booting up control plane ... I0221 09:03:51.421794 450843 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:03:51.422033 450843 start.go:160] libmachine.API.Create for "bridge-20220221084933-6550" (driver="docker") I0221 09:03:51.422065 450843 client.go:168] LocalClient.Create starting I0221 09:03:51.422157 450843 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:03:51.422198 450843 main.go:130] libmachine: Decoding PEM data... I0221 09:03:51.422218 450843 main.go:130] libmachine: Parsing certificate... I0221 09:03:51.422289 450843 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:03:51.422318 450843 main.go:130] libmachine: Decoding PEM data... I0221 09:03:51.422337 450843 main.go:130] libmachine: Parsing certificate... I0221 09:03:51.422664 450843 cli_runner.go:133] Run: docker network inspect bridge-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:03:51.458778 450843 cli_runner.go:180] docker network inspect bridge-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:03:51.458867 450843 network_create.go:254] running [docker network inspect bridge-20220221084933-6550] to gather additional debugging logs... 
I0221 09:03:51.458907 450843 cli_runner.go:133] Run: docker network inspect bridge-20220221084933-6550 W0221 09:03:51.492681 450843 cli_runner.go:180] docker network inspect bridge-20220221084933-6550 returned with exit code 1 I0221 09:03:51.492712 450843 network_create.go:257] error running [docker network inspect bridge-20220221084933-6550]: docker network inspect bridge-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: bridge-20220221084933-6550 I0221 09:03:51.492727 450843 network_create.go:259] output of [docker network inspect bridge-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: bridge-20220221084933-6550 ** /stderr ** I0221 09:03:51.492765 450843 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:03:51.534844 450843 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5d96ab4d6b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:0b:cb:43}} I0221 09:03:51.535740 450843 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3436ceea5013 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ca:78:ad:42}} I0221 09:03:51.536644 450843 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0009c01d0] misses:0} I0221 09:03:51.536695 450843 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:03:51.536710 450843 network_create.go:106] attempt to create docker network bridge-20220221084933-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ... 
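The subnet scan above is worth calling out: existing networks already hold 192.168.49.0/24 and 192.168.58.0/24, and the allocator settles on 192.168.67.0/24, i.e. the third octet advances in steps of 9 from 49. A simplified sketch of that selection, under the assumption (inferred from the subnets in this log, not from minikube's source) that candidates form exactly that arithmetic sequence of /24s:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate /24 subnets (192.168.49.0, .58.0, .67.0, ...)
    // and returns the first one not already claimed by an existing docker network.
    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 254; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet
    		}
    	}
    	return "" // exhausted the candidate range
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-5d96ab4d6b1a in the log
    		"192.168.58.0/24": true, // br-3436ceea5013 in the log
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.67.0/24, matching the reservation above
    }

Note also that the reservation is time-bounded ("reserving subnet 192.168.67.0 for 1m0s"), so an aborted start does not pin a subnet indefinitely.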
I0221 09:03:51.536770 450843 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220221084933-6550 I0221 09:03:51.614280 450843 network_create.go:90] docker network bridge-20220221084933-6550 192.168.67.0/24 created I0221 09:03:51.614320 450843 kic.go:106] calculated static IP "192.168.67.2" for the "bridge-20220221084933-6550" container I0221 09:03:51.614378 450843 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:03:51.655290 450843 cli_runner.go:133] Run: docker volume create bridge-20220221084933-6550 --label name.minikube.sigs.k8s.io=bridge-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:03:51.695234 450843 oci.go:102] Successfully created a docker volume bridge-20220221084933-6550 I0221 09:03:51.695312 450843 cli_runner.go:133] Run: docker run --rm --name bridge-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220221084933-6550 --entrypoint /usr/bin/test -v bridge-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:03:52.317204 450843 oci.go:106] Successfully prepared a docker volume bridge-20220221084933-6550 I0221 09:03:52.317259 450843 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:03:52.317279 450843 kic.go:179] Starting extracting preloaded images to volume ... I0221 09:03:52.317369 450843 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:03:58.456628 442801 out.go:203] - Configuring RBAC rules ... I0221 09:04:01.103747 442801 cni.go:93] Creating CNI manager for "bridge" I0221 09:04:01.108470 442801 out.go:176] * Configuring bridge CNI (Container Networking Interface) ... 
I0221 09:04:01.108564 442801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d I0221 09:04:01.117961 442801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes) I0221 09:04:01.137449 442801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:04:01.137607 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:01.137732 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=enable-default-cni-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T09_04_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:01.437957 442801 ops.go:34] apiserver oom_adj: -16 I0221 09:04:01.438080 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:02.000868 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:01.030789 450843 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (8.713376176s) I0221 09:04:01.030835 450843 kic.go:188] duration metric: took 8.713550 seconds to extract preloaded images to volume W0221 09:04:01.030877 450843 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:04:01.030890 450843 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
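The oom_adj probe above ("cat /proc/$(pgrep kube-apiserver)/oom_adj" answering -16) is the harness confirming that the apiserver is strongly shielded from the kernel OOM killer. The same read in Go (a sketch; PID discovery via pgrep, as in the logged shell command, is assumed to have happened elsewhere):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // oomAdj returns the (legacy) OOM score adjustment for a PID. A value of
    // -16, as logged ("apiserver oom_adj: -16"), makes the process a far less
    // likely OOM-kill victim than default-0 workloads.
    func oomAdj(pid int) (string, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	v, err := oomAdj(os.Getpid()) // stand-in for the kube-apiserver PID
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("oom_adj:", v)
    }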
I0221 09:04:01.030935 450843 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:04:01.149423 450843 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20220221084933-6550 --name bridge-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20220221084933-6550 --network bridge-20220221084933-6550 --ip 192.168.67.2 --volume bridge-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:04:01.619785 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Running}} I0221 09:04:01.657132 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:01.691906 450843 cli_runner.go:133] Run: docker exec bridge-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:04:01.762525 450843 oci.go:281] the created container "bridge-20220221084933-6550" has a running status. I0221 09:04:01.762561 450843 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa... I0221 09:04:01.825039 450843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:04:01.921998 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:01.965911 450843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:04:01.965932 450843 kic_runner.go:114] Args: [docker exec --privileged bridge-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:04:02.060550 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:02.102476 450843 machine.go:88] provisioning docker machine ... 
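Every port in the `docker run` above is published to 127.0.0.1 with an empty host port ("--publish=127.0.0.1::8443"), so Docker assigns free ephemeral ports, and the repeated container-inspect calls that follow exist to recover them. The lookup the harness keeps issuing, reduced to a sketch using the same inspect template seen in the cli_runner lines:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort recovers the ephemeral host port Docker assigned to a published
    // container port via `docker container inspect -f`.
    func hostPort(container, port string) (string, error) {
    	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("bridge-20220221084933-6550", "22/tcp")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh is at 127.0.0.1:" + p) // 49394 for this run
    }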
I0221 09:04:02.102515 450843 ubuntu.go:169] provisioning hostname "bridge-20220221084933-6550" I0221 09:04:02.102558 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:02.141339 450843 main.go:130] libmachine: Using SSH client type: native I0221 09:04:02.141561 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 } I0221 09:04:02.141585 450843 main.go:130] libmachine: About to run SSH command: sudo hostname bridge-20220221084933-6550 && echo "bridge-20220221084933-6550" | sudo tee /etc/hostname I0221 09:04:02.276015 450843 main.go:130] libmachine: SSH cmd err, output: : bridge-20220221084933-6550 I0221 09:04:02.276101 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:02.310672 450843 main.go:130] libmachine: Using SSH client type: native I0221 09:04:02.310874 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 } I0221 09:04:02.310899 450843 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sbridge-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 bridge-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:04:02.435046 450843 main.go:130] libmachine: SSH cmd err, output: : I0221 09:04:02.435075 450843 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:04:02.435113 450843 ubuntu.go:177] setting up certificates I0221 09:04:02.435120 450843 provision.go:83] configureAuth start I0221 09:04:02.435185 450843 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220221084933-6550 I0221 09:04:02.470021 450843 provision.go:138] copyHostCerts I0221 09:04:02.470092 450843 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing 
... I0221 09:04:02.470106 450843 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:04:02.470167 450843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:04:02.470239 450843 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:04:02.470253 450843 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:04:02.470274 450843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:04:02.470357 450843 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:04:02.470368 450843 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:04:02.470389 450843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:04:02.470433 450843 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.bridge-20220221084933-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube bridge-20220221084933-6550] I0221 09:04:02.642265 450843 provision.go:172] copyRemoteCerts I0221 09:04:02.642319 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:04:02.642351 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:02.675755 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:02.762558 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:04:02.781693 450843 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes) I0221 09:04:02.799380 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 09:04:02.817291 450843 provision.go:86] duration metric: configureAuth took 382.145126ms I0221 09:04:02.817321 450843 ubuntu.go:193] setting minikube options for container-runtime I0221 09:04:02.817512 450843 config.go:176] Loaded profile config "bridge-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:04:02.817595 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:02.851272 450843 main.go:130] libmachine: Using SSH client type: native I0221 09:04:02.851449 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 } I0221 09:04:02.851469 450843 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:04:02.971079 450843 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:04:02.971105 450843 ubuntu.go:71] root file system type: overlay I0221 09:04:02.971293 450843 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:04:02.971353 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:03.004341 450843 main.go:130] libmachine: Using SSH client type: native I0221 09:04:03.004526 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 } I0221 09:04:03.004630 450843 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:04:03.136130 450843 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:04:03.136225 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:03.170767 450843 main.go:130] libmachine: Using SSH client type: native I0221 09:04:03.170945 450843 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49394 } I0221 09:04:03.170983 450843 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:04:03.838552 450843 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:04:03.129819641 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:04:03.838589 450843 machine.go:91] provisioned docker machine in 1.736085907s I0221 09:04:03.838598 450843 client.go:171] LocalClient.Create took 12.416525048s I0221 09:04:03.838614 450843 start.go:168] duration metric: libmachine.API.Create for "bridge-20220221084933-6550" took 12.416582656s I0221 09:04:03.838620 450843 start.go:267] post-start starting for "bridge-20220221084933-6550" (driver="docker") I0221 09:04:03.838625 450843 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:04:03.838687 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:04:03.838740 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:03.871312 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:03.962963 450843 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:04:03.965877 450843 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:04:03.965896 450843 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:04:03.965904 450843 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:04:03.965911 450843 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:04:03.965921 450843 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
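The update above is deliberately idempotent: the new unit is written to docker.service.new, and only if `diff -u` reports a change is it moved into place and the daemon reloaded and restarted. Note also the empty `ExecStart=` line in the unit itself, which systemd reads as "clear any previously defined start command" so that the second `ExecStart=` replaces rather than appends. A minimal standalone sketch of the diff-gated swap, using the paths from this run:

  # replace the unit and restart dockerd only when its content actually changed
  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl -f daemon-reload
    sudo systemctl -f enable docker
    sudo systemctl -f restart docker
  fi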
I0221 09:04:03.965977 450843 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:04:03.966056 450843 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:04:03.966143 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:04:03.972869 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:04:03.990511 450843 start.go:270] post-start completed in 151.880985ms I0221 09:04:03.990813 450843 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220221084933-6550 I0221 09:04:04.024103 450843 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/config.json ... I0221 09:04:04.024411 450843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:04:04.024465 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:04.059852 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:04.143527 450843 start.go:129] duration metric: createHost completed in 12.724156873s I0221 09:04:04.143559 450843 start.go:80] releasing machines lock for "bridge-20220221084933-6550", held for 12.724325494s I0221 09:04:04.143657 450843 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220221084933-6550 I0221 09:04:04.176805 450843 ssh_runner.go:195] Run: systemctl --version I0221 09:04:04.176859 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:04.176889 450843 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:04:04.176938 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:04.211410 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:04.211622 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:04.435927 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:04:04.445594 450843 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:04:04.456463 450843 cruntime.go:272] 
skipping containerd shutdown because we are bound to it I0221 09:04:04.456545 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:04:04.466270 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:04:04.479108 450843 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:04:04.570245 450843 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:04:04.652126 450843 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:04:04.661792 450843 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:04:04.750226 450843 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:04:04.761194 450843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:04:04.800783 450843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:04:04.843188 450843 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:04:04.843269 450843 cli_runner.go:133] Run: docker network inspect bridge-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:04:04.875779 450843 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts I0221 09:04:04.879091 450843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:04:04.890584 450843 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:04:04.890666 450843 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:04:04.890718 450843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:04:04.924400 450843 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:04:04.924431 450843 docker.go:537] Images already preloaded, skipping extraction I0221 09:04:04.924490 450843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:04:04.960777 450843 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:04:04.960806 450843 cache_images.go:84] Images are preloaded, skipping loading I0221 09:04:04.960852 450843 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:04:05.053659 450843 cni.go:93] Creating CNI manager for "bridge" I0221 09:04:05.053700 450843 kubeadm.go:87] Using pod CIDR:
10.244.0.0/16 I0221 09:04:05.053725 450843 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-20220221084933-6550 NodeName:bridge-20220221084933-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:04:05.053913 450843 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "bridge-20220221084933-6550" kubeletExtraArgs: node-ip: 192.168.67.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.67.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:04:05.054031 450843 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=bridge-20220221084933-6550 --housekeeping-interval=5m
--kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} I0221 09:04:05.054097 450843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 09:04:05.061827 450843 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:04:05.061904 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:04:05.069705 450843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes) I0221 09:04:05.083281 450843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:04:05.096800 450843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes) I0221 09:04:05.111629 450843 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0221 09:04:05.114792 450843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:04:05.124307 450843 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550 for IP: 192.168.67.2 I0221 09:04:05.124419 450843 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:04:05.124463 450843 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:04:05.124507 450843 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.key I0221 09:04:05.124520 450843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt with IP's: [] I0221 09:04:05.319892 450843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt ... 
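The kubeadm config rendered above is shipped to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init. To see how it diverges from stock settings, one could diff it against kubeadm's own defaults; a sketch using this run's binary path:

  # print kubeadm's built-in InitConfiguration/ClusterConfiguration defaults and compare
  sudo /var/lib/minikube/binaries/v1.23.4/kubeadm config print init-defaults > /tmp/kubeadm-defaults.yaml
  sudo diff -u /tmp/kubeadm-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new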
I0221 09:04:05.319928 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: {Name:mk8cbb46271d42fb75fda4f65da2d7262d06ec86 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:05.320141 450843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.key ... I0221 09:04:05.320159 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.key: {Name:mkd13d656a2820a92f6d5b9d3905007effd80085 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:05.320271 450843 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e I0221 09:04:05.320288 450843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:04:05.618739 450843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e ... I0221 09:04:05.618772 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e: {Name:mka92eaa59d437c0a58d327ef573ac021dee9683 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:05.618979 450843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e ... 
I0221 09:04:05.619039 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e: {Name:mk723313c8c1c15643497cb4692d37cb78d49b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:05.619161 450843 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt I0221 09:04:05.619229 450843 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key I0221 09:04:05.619275 450843 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key I0221 09:04:05.619289 450843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt with IP's: [] I0221 09:04:05.755263 450843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt ... I0221 09:04:05.755298 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt: {Name:mk0223fd1865dc442f565c7049baeaab60cc34f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:05.755491 450843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key ... 
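crypto.go performs this signing in Go; expressed with openssl, the apiserver certificate step above amounts to roughly the following (file names are illustrative, the IP SANs are the ones logged):

  # key + CSR, then sign against the minikube CA with the same IP SANs
  openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
    -keyout apiserver.key -out apiserver.csr
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
    -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1') \
    -out apiserver.crt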
I0221 09:04:05.755509 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key: {Name:mkbe785cfdb21e9c0948d8fa3b523861363916c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:05.755667 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:04:05.755702 450843 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:04:05.755716 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:04:05.755740 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:04:05.755766 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:04:05.755787 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:04:05.755825 450843 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:04:05.756637 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:04:05.775376 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0221 09:04:05.793336 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) 
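The cert installation that follows links each CA file into /etc/ssl/certs under a name of the form <subject-hash>.0, the lookup convention OpenSSL's c_rehash automates; that is why hashes like 3ec20f2e and b5213941 appear below. The same step by hand, as a sketch:

  # compute the subject hash and install the symlink OpenSSL expects
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"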
I0221 09:04:05.810830 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:04:05.828582 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:04:05.846167 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:04:05.864214 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:04:05.882412 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:04:05.900011 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:04:05.919790 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:04:05.939351 450843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:04:05.957173 450843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:04:05.970053 450843 ssh_runner.go:195] Run: openssl version I0221 09:04:05.975134 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:04:05.982704 450843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:04:05.986765 450843 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:04:05.986816 450843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:04:05.991765 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:04:05.999292 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:04:06.006814 450843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:04:06.010099 450843 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:04:06.010146 450843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:04:06.015300 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || 
ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:04:06.023344 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:04:06.031283 450843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:04:06.034681 450843 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:04:06.034729 450843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:04:06.039625 450843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:04:06.048486 450843 kubeadm.go:391] StartCluster: {Name:bridge-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:bridge-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:04:06.048635 450843 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:04:06.081455 450843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:04:06.088719 450843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:04:06.095804 450843 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:04:06.095857 450843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 
09:04:06.102652 450843 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:04:06.102690 450843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:04:02.500237 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:03.000849 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:03.500336 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:04.000890 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:04.500907 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:05.000591 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:05.501215 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:06.001079 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:06.500383 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:07.000795 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:06.625853 450843 out.go:203] - Generating certificates and keys ... I0221 09:04:09.076334 450843 out.go:203] - Booting up control plane ... 
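kubeadm init is launched above with a long --ignore-preflight-errors list because, under the docker driver, the control plane runs inside a container where checks like Swap, Mem, and SystemVerification do not apply. The skipped checks can still be exercised on their own; a sketch against the same config (kubeadm has carried the init phase preflight subcommand since well before v1.23):

  # run only the preflight phase against the generated config
  sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" \
    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml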
I0221 09:04:07.500614 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:08.000920 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:08.501012 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:09.000860 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:09.500890 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:10.000642 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:10.500378 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:11.000503 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:11.501273 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:12.000626 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:12.501284 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:13.000940 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:13.500277 442801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:13.714438 442801 kubeadm.go:1020] duration metric: took 12.576875325s to wait for elevateKubeSystemPrivileges. I0221 09:04:13.714474 442801 kubeadm.go:393] StartCluster complete in 27.983669732s I0221 09:04:13.714495 442801 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:13.714612 442801 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:04:13.716690 442801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} W0221 09:04:13.743696 442801 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again I0221 09:04:14.746909 442801 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "enable-default-cni-20220221084933-6550" rescaled to 1 I0221 09:04:14.747037 442801 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:04:14.748728 442801 out.go:176] * Verifying Kubernetes components... 
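The repeating kubectl get sa default lines are the readiness probe behind elevateKubeSystemPrivileges: the default ServiceAccount only appears once the apiserver and the controller-manager's token controller are both up. As a standalone loop (a sketch; the 500ms cadence matches the timestamps in the log):

  # block until the default ServiceAccount exists
  until sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done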
I0221 09:04:14.747075 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:04:14.747094 442801 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:04:14.748929 442801 addons.go:65] Setting storage-provisioner=true in profile "enable-default-cni-20220221084933-6550" I0221 09:04:14.748952 442801 addons.go:153] Setting addon storage-provisioner=true in "enable-default-cni-20220221084933-6550" W0221 09:04:14.748963 442801 addons.go:165] addon storage-provisioner should already be in state true I0221 09:04:14.748992 442801 host.go:66] Checking if "enable-default-cni-20220221084933-6550" exists ... I0221 09:04:14.749670 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:04:14.747302 442801 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:04:14.748793 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:04:14.749834 442801 addons.go:65] Setting default-storageclass=true in profile "enable-default-cni-20220221084933-6550" I0221 09:04:14.749850 442801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-20220221084933-6550" I0221 09:04:14.750123 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:04:14.812896 442801 addons.go:153] Setting addon default-storageclass=true in "enable-default-cni-20220221084933-6550" W0221 09:04:14.812923 442801 addons.go:165] addon default-storageclass should already be in state true I0221 09:04:14.812954 442801 host.go:66] Checking if "enable-default-cni-20220221084933-6550" exists ... I0221 09:04:14.815855 442801 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:04:14.816096 442801 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:14.816112 442801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:04:14.816168 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:04:14.813484 442801 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220221084933-6550 --format={{.State.Status}} I0221 09:04:14.854856 442801 node_ready.go:35] waiting up to 5m0s for node "enable-default-cni-20220221084933-6550" to be "Ready" ... I0221 09:04:14.855914 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:04:14.860755 442801 node_ready.go:49] node "enable-default-cni-20220221084933-6550" has status "Ready":"True" I0221 09:04:14.860778 442801 node_ready.go:38] duration metric: took 5.885081ms waiting for node "enable-default-cni-20220221084933-6550" to be "Ready" ... 
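Both addons are flagged "should already be in state true" because storage-provisioner and default-storageclass are on by default for new profiles; the two scp'd manifests are applied with kubectl below. Once they land, the default class is visible (a sketch; assumes kubectl is pointed at this profile's kubeconfig):

  # the default StorageClass carries the "(default)" marker in its NAME column
  kubectl get storageclass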
I0221 09:04:14.860789 442801 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:04:14.866764 442801 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:04:14.866793 442801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:04:14.866843 442801 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220221084933-6550 I0221 09:04:14.873907 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:04:14.877501 442801 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-4pdmv" in "kube-system" namespace to be "Ready" ... I0221 09:04:14.914653 442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/enable-default-cni-20220221084933-6550/id_rsa Username:docker} I0221 09:04:15.119543 442801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:04:15.120676 442801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:15.920017 442801 pod_ready.go:92] pod "coredns-64897985d-4pdmv" in "kube-system" namespace has status "Ready":"True" I0221 09:04:15.920046 442801 pod_ready.go:81] duration metric: took 1.042510787s waiting for pod "coredns-64897985d-4pdmv" in "kube-system" namespace to be "Ready" ... I0221 09:04:15.920062 442801 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-mr75l" in "kube-system" namespace to be "Ready" ... I0221 09:04:16.209420 442801 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.35347231s) I0221 09:04:16.209452 442801 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS I0221 09:04:16.210214 442801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.090624607s) I0221 09:04:16.251854 442801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131142214s) I0221 09:04:16.620063 450843 out.go:203] - Configuring RBAC rules ... 
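The sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host gateway (192.168.58.1 for this profile). To inspect the result (a sketch; assumes kubectl is pointed at the cluster):

  # dump the live Corefile; expect a hosts { ... fallthrough } stanza before forward
  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'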
I0221 09:04:17.034273 450843 cni.go:93] Creating CNI manager for "bridge" I0221 09:04:16.253869 442801 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:04:16.253898 442801 addons.go:417] enableAddons completed in 1.506809944s I0221 09:04:17.036370 450843 out.go:176] * Configuring bridge CNI (Container Networking Interface) ... I0221 09:04:17.036445 450843 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d I0221 09:04:17.045026 450843 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes) I0221 09:04:17.102662 450843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:04:17.102740 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=bridge-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T09_04_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:17.102740 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:17.205474 450843 ops.go:34] apiserver oom_adj: -16 I0221 09:04:17.548076 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:18.144207 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:18.644693 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:19.144213 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:19.644290 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:20.144831 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:20.644017 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:17.934661 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:19.937394 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:21.144510 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:21.644204 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:22.144205 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:22.644156 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:23.144049 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:23.644213 450843 
ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:24.144756 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:24.644658 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:25.143988 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:25.643976 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:22.434245 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:24.935381 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:26.143995 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:26.644758 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:27.144380 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:27.643849 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:28.144209 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:28.644175 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:29.144225 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:29.644097 450843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:04:29.699085 450843 kubeadm.go:1020] duration metric: took 12.596412472s to wait for elevateKubeSystemPrivileges. 
I0221 09:04:29.699118 450843 kubeadm.go:393] StartCluster complete in 23.650643743s I0221 09:04:29.699139 450843 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:29.699242 450843 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:04:29.700907 450843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:04:30.220009 450843 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-20220221084933-6550" rescaled to 1 I0221 09:04:30.220105 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:04:30.220120 450843 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:04:30.220167 450843 addons.go:65] Setting storage-provisioner=true in profile "bridge-20220221084933-6550" I0221 09:04:30.220185 450843 addons.go:153] Setting addon storage-provisioner=true in "bridge-20220221084933-6550" W0221 09:04:30.220199 450843 addons.go:165] addon storage-provisioner should already be in state true I0221 09:04:30.220225 450843 host.go:66] Checking if "bridge-20220221084933-6550" exists ... I0221 09:04:30.220418 450843 config.go:176] Loaded profile config "bridge-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:04:30.220469 450843 addons.go:65] Setting default-storageclass=true in profile "bridge-20220221084933-6550" I0221 09:04:30.220481 450843 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-20220221084933-6550" I0221 09:04:30.220735 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:30.220735 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:30.220094 450843 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:04:30.223111 450843 out.go:176] * Verifying Kubernetes components... 
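The "rescaled to 1" lines reflect minikube trimming the stock two-replica coredns Deployment down to one replica on a single-node cluster; the earlier "object has been modified" warning was an optimistic-concurrency conflict that got retried. The kubectl equivalent of the rescale, as a sketch:

  # a single-node cluster only needs one coredns replica
  kubectl -n kube-system scale deployment coredns --replicas=1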
I0221 09:04:30.223190 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:04:30.266915 450843 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:04:30.267101 450843 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:30.267118 450843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:04:30.267155 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:30.301955 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:30.305286 450843 addons.go:153] Setting addon default-storageclass=true in "bridge-20220221084933-6550" W0221 09:04:30.305320 450843 addons.go:165] addon default-storageclass should already be in state true I0221 09:04:30.305354 450843 host.go:66] Checking if "bridge-20220221084933-6550" exists ... I0221 09:04:30.305864 450843 cli_runner.go:133] Run: docker container inspect bridge-20220221084933-6550 --format={{.State.Status}} I0221 09:04:30.354025 450843 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:04:30.354053 450843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:04:30.354110 450843 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220221084933-6550 I0221 09:04:30.388147 450843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/bridge-20220221084933-6550/id_rsa Username:docker} I0221 09:04:30.423881 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:04:30.427457 450843 node_ready.go:35] waiting up to 5m0s for node "bridge-20220221084933-6550" to be "Ready" ... I0221 09:04:30.432136 450843 node_ready.go:49] node "bridge-20220221084933-6550" has status "Ready":"True" I0221 09:04:30.432196 450843 node_ready.go:38] duration metric: took 4.664633ms waiting for node "bridge-20220221084933-6550" to be "Ready" ... I0221 09:04:30.432212 450843 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:04:30.441872 450843 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-7jshp" in "kube-system" namespace to be "Ready" ... 
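pod_ready.go then polls each system-critical pod by label until it reports Ready. A one-shot equivalent with stock kubectl (a sketch; the label and timeout come from the log above):

  # wait up to 5 minutes for the CoreDNS pods to become Ready
  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=5m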
I0221 09:04:30.521878 450843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:04:30.527474 450843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:04:31.733720 450843 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.309793379s) I0221 09:04:31.733768 450843 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS I0221 09:04:31.839340 450843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31737173s) I0221 09:04:31.839463 450843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.311959107s) I0221 09:04:27.435159 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:29.936608 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:31.841123 450843 out.go:176] * Enabled addons: storage-provisioner, default-storageclass I0221 09:04:31.841211 450843 addons.go:417] enableAddons completed in 1.62109441s I0221 09:04:32.455774 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:34.955583 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:32.435115 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:34.935474 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:36.935912 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:36.956049 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:39.456093 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:38.936067 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:41.434307 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:41.456541 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:43.955832 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:43.436111 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:45.934096 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:46.456741 
450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:48.956151 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:47.935259 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:50.434755 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:51.455727 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:53.955094 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:55.955440 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:52.934820 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:54.937852 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:57.956232 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:59.956365 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:04:57.435035 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:04:59.934049 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:01.934530 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:02.455268 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:04.455740 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:03.935138 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:06.434776 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:06.456348 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:08.956186 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:08.935346 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:10.935572 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:11.455698 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:13.455791 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:15.456360 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:13.433982 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:15.434465 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:17.956930 450843 
pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:20.455598 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:17.434688 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:19.935355 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:22.955384 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:24.956304 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:22.435376 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:24.935574 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:27.456347 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:29.457022 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:27.434317 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:29.435822 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:31.935306 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:31.956186 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:34.455982 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:34.434660 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:36.435252 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:36.956407 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:38.956481 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:38.935022 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:41.434276 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:41.455912 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:43.955728 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:43.434849 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:45.435045 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:46.455771 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:48.456125 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:50.955535 450843 pod_ready.go:102] 
pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:47.435419 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:49.934762 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:51.935246 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:53.455757 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:55.955989 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:54.435110 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:56.934967 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:05:58.455600 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:00.456412 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:05:58.935863 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:01.435228 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:02.956159 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:05.456333 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:03.435332 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:05.934148 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:07.955776 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:10.456294 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:07.934512 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:09.935280 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:12.955819 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:15.456281 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:12.434971 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:14.935453 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:16.935799 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:17.955309 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:19.955885 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:19.434977 442801 pod_ready.go:102] pod 
"coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:21.934868 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:21.955949 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:24.456143 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:23.935521 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:26.435200 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:26.955294 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:29.455254 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:28.934695 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:30.935605 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:31.955457 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:33.955912 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:35.956083 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:33.434314 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:35.435174 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:38.456121 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:40.456245 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:37.935547 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:40.434317 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:42.955236 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:44.955806 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:42.434719 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:44.435481 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:46.934690 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:47.455860 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:49.457435 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:49.434637 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:51.935545 442801 pod_ready.go:102] pod 
"coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:51.955645 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:53.955837 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:55.955962 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:54.434212 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:56.435042 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:06:58.456029 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:00.956576 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:06:58.934726 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:00.936082 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:03.455781 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:05.955666 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:02.936505 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:05.435785 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:07.955926 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:10.456454 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:07.934506 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:09.934869 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:11.935083 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:12.956960 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:15.456313 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:13.935264 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:16.434803 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 08:57:00 UTC, end at Mon 2022-02-21 09:07:21 UTC. 
-- Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666768688Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666795906Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666814149Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.666822586Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.670743564Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676732207Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676756014Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676761700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.676921956Z" level=info msg="Loading containers: start." Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.768531671Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.805140931Z" level=info msg="Loading containers: done." Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.825275313Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.825342305Z" level=info msg="Daemon has completed initialization" Feb 21 08:57:02 auto-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. 
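[Editor's note] The dockerd "ignoring event ... TaskDelete" entries that follow mark container teardowns, and the gaps between them widen from roughly 30s to several minutes: the signature of the kubelet restart backoff visible later in this log. One way to pull the exit state behind such an event is the Docker Go SDK; a sketch assuming github.com/docker/docker/client, with the container ID copied from the last teardown event below:

// inspectsketch.go: read the exit code and restart count of the
// storage-provisioner container seen in the dockerd events below.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Container ID taken from the final "ignoring event" line below.
	id := "88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b"
	info, err := cli.ContainerInspect(context.Background(), id)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%s exitCode=%d oomKilled=%v restarts=%d\n",
		info.State.Status, info.State.ExitCode, info.State.OOMKilled, info.RestartCount)
}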
Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.844491195Z" level=info msg="API listen on [::]:2376" Feb 21 08:57:02 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:02.850635107Z" level=info msg="API listen on /var/run/docker.sock" Feb 21 08:57:43 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:43.342083627Z" level=info msg="ignoring event" container=0c459eb8fed84d243a28367e4c6028d00b83be6a1b9ceb50262498a6589c186c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 08:57:43 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:57:43.407322662Z" level=info msg="ignoring event" container=f47bad55ea0449b1b8d785312c064d318e410699acf2b083fa64672b4050538d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 08:58:04 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:58:04.588382338Z" level=info msg="ignoring event" container=70f6c474ffc9d742d5078efd920a71d49d8b5f63e6ef155b915f2b7be6a7b31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 08:58:35 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:58:35.573240058Z" level=info msg="ignoring event" container=fc5f64a664c235a0ed09411bff0370c4cb20ea225e43d0dca8547983013e4b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 08:59:17 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T08:59:17.690530130Z" level=info msg="ignoring event" container=28664154f0a61332a8c7e00f53457bc0cd85d7502285b9b6f234a56fe501be70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:00:11 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:00:11.696415101Z" level=info msg="ignoring event" container=197c1336a22eab95236acd198fce43de13c1b0584fa184af45e5e69609ade3d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:01:30 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:01:30.689070641Z" level=info msg="ignoring event" container=1cd0b722c1ad179fbe04eec47fa9672f60c6ce42361d20956d75d80fafb850cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:03:28 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:03:28.729961272Z" level=info msg="ignoring event" container=eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:06:43 auto-20220221084933-6550 dockerd[460]: time="2022-02-21T09:06:43.681981932Z" level=info msg="ignoring event" container=88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 88e3a5d7acafa 6e38f40d628db About a minute ago Exited storage-provisioner 6 effeeb1480903 b7624aca6f588 k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 5 minutes ago Running dnsutils 0 00e602440bce3 9ec110d5717f1 a4ca41631cc7a 9 minutes ago Running coredns 0 42d636d8f7715 76924ebff8388 2114245ec4d6b 9 minutes ago Running kube-proxy 0 d10927a13c2b7 b23ee2bbc19da 25f8c7f3da61c 10 minutes ago Running etcd 0 d92c7b63f2668 0bb1b94ca5a9e 25444908517a5 10 minutes ago Running kube-controller-manager 0 aca06434b9eef c78588822ac6e aceacb6244f9f 10 minutes ago Running kube-scheduler 0 7b143332b596c 
ee44803ab83a5 62930710c9634 10 minutes ago Running kube-apiserver 0 31a5d6a981fd8 * * ==> coredns [9ec110d5717f] <== * [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [this line repeated verbatim for the entire log window] * * ==> describe nodes <== * Name: auto-20220221084933-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=auto-20220221084933-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=auto-20220221084933-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T08_57_18_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 08:57:15 +0000 Taints: Unschedulable: false Lease: HolderIdentity: auto-20220221084933-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:07:20 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:02:24 +0000 Mon, 21 Feb 2022 08:57:11 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:02:24 +0000 Mon, 21 Feb 2022 08:57:11 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:02:24 +0000 Mon, 21 Feb 2022 08:57:11 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:02:24 +0000 Mon, 21 Feb 2022 08:57:28 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.76.2 Hostname: auto-20220221084933-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8
ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: 6245a238-b599-4ae4-881d-541b5f730f40 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default netcat-668db85669-v8bk5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s kube-system coredns-64897985d-rg6k7 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 9m50s kube-system etcd-auto-20220221084933-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 10m kube-system kube-apiserver-auto-20220221084933-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-controller-manager-auto-20220221084933-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-proxy-j6t4r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m50s kube-system kube-scheduler-auto-20220221084933-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m48s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 9m48s kube-proxy Normal NodeHasSufficientMemory 10m kubelet Node auto-20220221084933-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 10m kubelet Node auto-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m kubelet Node auto-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods Normal Starting 10m kubelet Starting kubelet.
Normal NodeReady 9m53s kubelet Node auto-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000008] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +11.606902] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.995903] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000008] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.999615] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +11.726696] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.996095] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.999665] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [Feb21 09:06] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.995939] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +5.003671] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000008] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +25.459126] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.998672] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 [ +4.999618] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a [ +0.000007] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00 * * ==> etcd [b23ee2bbc19d] <== * {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 
{"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:auto-20220221084933-6550 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:57:12.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T08:57:12.408Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T08:57:12.408Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T08:57:12.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"} {"level":"info","ts":"2022-02-21T08:57:12.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:03:36.569Z","caller":"traceutil/trace.go:171","msg":"trace[55828461] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"116.088575ms","start":"2022-02-21T09:03:36.453Z","end":"2022-02-21T09:03:36.569Z","steps":["trace[55828461] 'process raft request' (duration: 113.749116ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:03:56.662Z","caller":"traceutil/trace.go:171","msg":"trace[1655245063] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"111.010424ms","start":"2022-02-21T09:03:56.551Z","end":"2022-02-21T09:03:56.662Z","steps":["trace[1655245063] 'process raft request' (duration: 13.130369ms)","trace[1655245063] 'compare' (duration: 97.781597ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:03:56.662Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.098376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-02-21T09:03:56.662Z","caller":"traceutil/trace.go:171","msg":"trace[1374387149] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:648; }","duration":"193.270999ms","start":"2022-02-21T09:03:56.469Z","end":"2022-02-21T09:03:56.662Z","steps":["trace[1374387149] 'agreement among raft nodes before linearized reading' (duration: 95.291053ms)","trace[1374387149] 'range keys from in-memory index tree' (duration: 97.769916ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:03:56.779Z","caller":"traceutil/trace.go:171","msg":"trace[1954821084] linearizableReadLoop","detail":"{readStateIndex:744; appliedIndex:744; }","duration":"114.80934ms","start":"2022-02-21T09:03:56.664Z","end":"2022-02-21T09:03:56.779Z","steps":["trace[1954821084] 'read index received' (duration: 114.79616ms)","trace[1954821084] 'applied index is now lower than readState.Index' (duration: 11.307µs)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:03:56.878Z","caller":"etcdserver/util.go:166","msg":"apply request took too 
long","took":"214.207147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"} {"level":"info","ts":"2022-02-21T09:03:56.878Z","caller":"traceutil/trace.go:171","msg":"trace[679182698] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:649; }","duration":"214.301324ms","start":"2022-02-21T09:03:56.664Z","end":"2022-02-21T09:03:56.878Z","steps":["trace[679182698] 'agreement among raft nodes before linearized reading' (duration: 114.936027ms)","trace[679182698] 'range keys from in-memory index tree' (duration: 99.227689ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:07:12.424Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":621} {"level":"info","ts":"2022-02-21T09:07:12.425Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":621,"took":"587.592µs"} * * ==> kernel <== * 09:07:21 up 49 min, 0 users, load average: 0.88, 2.77, 3.13 Linux auto-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [ee44803ab83a] <== * I0221 08:57:14.932266 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 08:57:14.947237 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 08:57:14.951403 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 08:57:14.952540 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 08:57:14.953486 1 shared_informer.go:247] Caches are synced for crd-autoregister I0221 08:57:15.845870 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 08:57:15.852269 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 08:57:15.854707 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 08:57:15.856109 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 08:57:15.856133 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 08:57:16.325532 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 08:57:16.364060 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 08:57:16.436271 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 08:57:16.441781 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2] I0221 08:57:16.442808 1 controller.go:611] quota admission added evaluator for: endpoints I0221 08:57:16.446390 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 08:57:16.983217 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 08:57:18.042389 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 08:57:18.049860 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 08:57:18.062188 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 08:57:18.344341 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 08:57:30.622969 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 08:57:31.608959 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 08:57:32.936691 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:01:46.059129 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.110.232.80] * * ==> kube-controller-manager [0bb1b94ca5a9] <== * I0221 08:57:30.727273 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0221 08:57:30.727289 1 shared_informer.go:247] Caches are synced for cidrallocator I0221 08:57:30.733960 1 range_allocator.go:374] Set node auto-20220221084933-6550 PodCIDR to [10.244.0.0/24] I0221 08:57:30.744945 1 shared_informer.go:247] Caches are synced for endpoint_slice I0221 08:57:30.768858 1 shared_informer.go:247] Caches are synced for persistent volume I0221 08:57:30.768854 1 shared_informer.go:247] Caches are synced for attach detach I0221 08:57:30.768874 1 shared_informer.go:247] Caches are synced for TTL I0221 08:57:30.785681 1 shared_informer.go:247] Caches are synced for daemon sets I0221 08:57:30.818241 1 shared_informer.go:247] Caches are synced for GC I0221 08:57:30.823685 1 shared_informer.go:247] Caches are synced for taint I0221 08:57:30.823827 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: I0221 08:57:30.823901 1 taint_manager.go:187] "Starting NoExecuteTaintManager" W0221 08:57:30.823933 1 node_lifecycle_controller.go:1012] Missing timestamp for Node auto-20220221084933-6550. Assuming now as a timestamp. I0221 08:57:30.823990 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0221 08:57:30.824086 1 event.go:294] "Event occurred" object="auto-20220221084933-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node auto-20220221084933-6550 event: Registered Node auto-20220221084933-6550 in Controller" I0221 08:57:31.188640 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:57:31.188666 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0221 08:57:31.210469 1 shared_informer.go:247] Caches are synced for garbage collector I0221 08:57:31.429366 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-6wgl9" I0221 08:57:31.436890 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rg6k7" I0221 08:57:31.558096 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 08:57:31.605840 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-6wgl9" I0221 08:57:31.615476 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j6t4r" I0221 09:01:46.079752 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:01:46.088592 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-v8bk5" * * ==> kube-proxy [76924ebff838] <== * I0221 08:57:32.804263 1 node.go:163] Successfully retrieved node IP: 192.168.76.2 I0221 08:57:32.804361 1 server_others.go:138] "Detected node IP" address="192.168.76.2" I0221 08:57:32.804399 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 08:57:32.910626 1 server_others.go:206] "Using iptables Proxier" I0221 08:57:32.910827 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 08:57:32.910952 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 08:57:32.911065 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 08:57:32.911574 1 server.go:656] "Version info" version="v1.23.4" I0221 08:57:32.916913 1 config.go:317] "Starting service config controller" I0221 08:57:32.916951 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 08:57:32.933406 1 config.go:226] "Starting endpoint slice config controller" I0221 08:57:32.934316 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 08:57:33.017817 1 shared_informer.go:247] Caches are synced for service config I0221 08:57:33.035222 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [c78588822ac6] <== * E0221 08:57:14.931862 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 08:57:14.931721 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" 
cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 08:57:14.932362 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0221 08:57:15.757399 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 08:57:15.757446 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 08:57:15.834131 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 08:57:15.834178 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 08:57:15.868186 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 08:57:15.868218 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 08:57:15.898542 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 08:57:15.898573 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0221 08:57:16.026736 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 08:57:16.026774 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 08:57:16.096389 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 08:57:16.096418 1 
reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 08:57:16.114163 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 08:57:16.114197 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 08:57:16.115203 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 08:57:16.115239 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 08:57:16.327598 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" W0221 08:57:16.339522 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 08:57:16.339559 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 08:57:17.123087 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" E0221 08:57:17.415353 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system" I0221 08:57:18.921914 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 08:57:00 UTC, end at Mon 2022-02-21 09:07:22 UTC. 
-- Feb 21 09:04:21 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:21.536776 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:04:32 auto-20220221084933-6550 kubelet[2016]: I0221 09:04:32.536644 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:04:32 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:32.536889 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:04:46 auto-20220221084933-6550 kubelet[2016]: I0221 09:04:46.536596 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:04:46 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:46.536816 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:04:58 auto-20220221084933-6550 kubelet[2016]: I0221 09:04:58.536322 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:04:58 auto-20220221084933-6550 kubelet[2016]: E0221 09:04:58.536530 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:12 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:12.536010 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:12 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:12.536215 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:25 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:25.536394 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:25 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:25.536660 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" 
pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:38 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:38.536059 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:38 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:38.536255 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:05:49 auto-20220221084933-6550 kubelet[2016]: I0221 09:05:49.535615 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:05:49 auto-20220221084933-6550 kubelet[2016]: E0221 09:05:49.535838 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:06:00 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:00.536189 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:06:00 auto-20220221084933-6550 kubelet[2016]: E0221 09:06:00.536424 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:06:13 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:13.536093 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:06:44 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:44.461250 2016 scope.go:110] "RemoveContainer" containerID="eb7ea3e7bcfd9ed6e4cfecafe560ef806c60bf5af7c2673497a39a09f6455dac" Feb 21 09:06:44 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:44.461561 2016 scope.go:110] "RemoveContainer" containerID="88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b" Feb 21 09:06:44 auto-20220221084933-6550 kubelet[2016]: E0221 09:06:44.461786 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:06:56 auto-20220221084933-6550 kubelet[2016]: I0221 09:06:56.535695 2016 scope.go:110] "RemoveContainer" containerID="88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b" Feb 21 09:06:56 auto-20220221084933-6550 kubelet[2016]: E0221 09:06:56.535984 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner 
pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b Feb 21 09:07:09 auto-20220221084933-6550 kubelet[2016]: I0221 09:07:09.535800 2016 scope.go:110] "RemoveContainer" containerID="88e3a5d7acafa1029dcb1e4d17a83105230bce1b9394bc49b9a690cef8400e2b" Feb 21 09:07:09 auto-20220221084933-6550 kubelet[2016]: E0221 09:07:09.536018 2016 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cb2b449c-788d-4efb-9f51-1de24e609c8b)\"" pod="kube-system/storage-provisioner" podUID=cb2b449c-788d-4efb-9f51-1de24e609c8b * * ==> storage-provisioner [88e3a5d7acaf] <== * I0221 09:06:13.663212 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0221 09:06:43.666342 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout -- /stdout -- helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p auto-20220221084933-6550 -n auto-20220221084933-6550 helpers_test.go:262: (dbg) Run: kubectl --context auto-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:271: non-running pods: helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/auto]: describe non-running pods <====== helpers_test.go:276: (dbg) Run: kubectl --context auto-20220221084933-6550 describe pod helpers_test.go:276: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 describe pod : exit status 1 (40.092807ms) ** stderr ** error: resource name may not be empty ** /stderr ** helpers_test.go:278: kubectl --context auto-20220221084933-6550 describe pod : exit status 1 helpers_test.go:176: Cleaning up "auto-20220221084933-6550" profile ... helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p auto-20220221084933-6550 helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p auto-20220221084933-6550: (2.670258695s) --- FAIL: TestNetworkPlugins/group/auto (836.10s) === FAIL: . 
TestNetworkPlugins/group/kindnet/DNS (352.09s) net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200766854s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148465058s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140488104s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136630445s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148676284s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129512436s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:05:38.483426 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128902243s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) 
Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12772282s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:06:21.489618 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:26.610648 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory E0221 09:06:32.220908 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory E0221 09:06:36.851605 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137517782s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:06:57.332109 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:07:30.569068 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory E0221 09:07:38.292715 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151937061s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14103386s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context 
kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:09:33.149088 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.253102507s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* --- FAIL: TestNetworkPlugins/group/kindnet/DNS (352.09s) === FAIL: . TestNetworkPlugins/group/kindnet (422.20s) net_test.go:198: "kindnet" test finished in 20m9.755113612s, failed=true net_test.go:199: *** TestNetworkPlugins/group/kindnet FAILED at 2022-02-21 09:09:43.756287859 +0000 UTC m=+2676.518607461 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/kindnet]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect kindnet-20220221084934-6550 helpers_test.go:236: (dbg) docker inspect kindnet-20220221084934-6550: -- stdout -- [ { "Id": "c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8", "Created": "2022-02-21T09:02:53.536636017Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 423673, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T09:02:53.928380162Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/resolv.conf", "HostnamePath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/hostname", "HostsPath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/hosts", "LogPath": "/var/lib/docker/containers/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8/c1e6246a98754ece55098fded4ad0fdee8bac096c2ada346a21bcfdcf7c74ea8-json.log", "Name": "/kindnet-20220221084934-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "kindnet-20220221084934-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "kindnet-20220221084934-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, 
"GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a3
44f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804/merged", "UpperDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804/diff", "WorkDir": "/var/lib/docker/overlay2/8cae0888aea74361910435acf7cb12e68553c58d082e4d1dd05a51358b965804/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "kindnet-20220221084934-6550", "Source": "/var/lib/docker/volumes/kindnet-20220221084934-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "kindnet-20220221084934-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": 
"kindnet-20220221084934-6550", "name.minikube.sigs.k8s.io": "kindnet-20220221084934-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "64a33474aca43e4c210eb7d638d4895ff263c795f7e4d8f9cf9b27e15672955f", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49384" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49383" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49380" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49382" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49381" } ] }, "SandboxKey": "/var/run/docker/netns/64a33474aca4", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "kindnet-20220221084934-6550": { "IPAMConfig": { "IPv4Address": "192.168.49.2" }, "Links": null, "Aliases": [ "c1e6246a9875", "kindnet-20220221084934-6550" ], "NetworkID": "5d96ab4d6b1ae076cca503cf53d5c36ffb8868b0be10b67aca009ffaf43ed991", "EndpointID": "48eee4fc9b8f861162fedf1f848e7419fd58c043f8784d407ccf05d104b0ad30", "Gateway": "192.168.49.1", "IPAddress": "192.168.49.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:31:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kindnet-20220221084934-6550 -n kindnet-20220221084934-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/kindnet FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/kindnet]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p kindnet-20220221084934-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p kindnet-20220221084934-6550 logs -n 25: (1.077134548s) helpers_test.go:253: TestNetworkPlugins/group/kindnet logs: -- stdout -- * * ==> Audit <== * |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | logs | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:27 UTC | Mon, 21 Feb 2022 08:54:29 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | delete | -p | stopped-upgrade-20220221085315-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:29 UTC | Mon, 21 Feb 2022 08:54:31 UTC | | | stopped-upgrade-20220221085315-6550 | | | | | | | start | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:40 UTC | Mon, 21 Feb 2022 08:54:44 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | | --memory=2048 | | | | | | | | --cert-expiration=8760h | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | delete | -p | cert-expiration-20220221085105-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:54:44 UTC | Mon, 21 Feb 2022 08:54:47 UTC | | | cert-expiration-20220221085105-6550 | | | | | | | start | -p cilium-20220221084934-6550 | 
cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:33 UTC | Mon, 21 Feb 2022 08:55:10 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:15 UTC | Mon, 21 Feb 2022 08:55:16 UTC | | | pgrep -a kubelet | | | | | | | delete | -p cilium-20220221084934-6550 | cilium-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:29 UTC | Mon, 21 Feb 2022 08:55:33 UTC | | start | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:55:33 UTC | Mon, 21 Feb 2022 08:56:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=false --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 
2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:07:25 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:07:25.365204 462115 out.go:297] Setting OutFile to fd 1 ... I0221 09:07:25.365306 462115 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:07:25.365316 462115 out.go:310] Setting ErrFile to fd 2... I0221 09:07:25.365320 462115 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:07:25.365432 462115 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:07:25.365703 462115 out.go:304] Setting JSON to false I0221 09:07:25.367382 462115 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3000,"bootTime":1645431446,"procs":605,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:07:25.367473 462115 start.go:122] virtualization: kvm guest I0221 09:07:25.370626 462115 out.go:176] * [kubenet-20220221084933-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:07:25.372118 462115 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:07:25.370818 462115 notify.go:193] Checking for updates... 
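Note: the out.go:344 entries above record TERM and COLORTERM before deciding that the terminal "probably does not support color". A minimal Go sketch of such a probe, assuming only the env-var check those messages imply; this is illustrative, not minikube's actual out package logic:

package main

import (
	"fmt"
	"os"
	"strings"
)

// supportsColor applies the heuristic suggested by the log message
// "TERM=,COLORTERM=, which probably does not support color":
// both variables empty, or TERM set to "dumb", means no color.
func supportsColor() bool {
	term := os.Getenv("TERM")
	colorTerm := os.Getenv("COLORTERM")
	if term == "" && colorTerm == "" {
		return false // matches the situation logged above
	}
	return !strings.EqualFold(term, "dumb")
}

func main() {
	fmt.Printf("TERM=%s,COLORTERM=%s, color=%v\n",
		os.Getenv("TERM"), os.Getenv("COLORTERM"), supportsColor())
}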
I0221 09:07:25.373713 462115 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:07:25.375244 462115 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:07:25.376596 462115 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:07:25.378032 462115 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:07:25.378593 462115 config.go:176] Loaded profile config "bridge-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:25.378683 462115 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:25.378760 462115 config.go:176] Loaded profile config "kindnet-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:25.378816 462115 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:07:25.423110 462115 docker.go:132] docker version: linux-20.10.12 I0221 09:07:25.423225 462115 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:07:25.519741 462115 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:07:25.456330991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:07:25.519904 462115 docker.go:237] overlay module found I0221 09:07:25.522315 462115 out.go:176] * Using the docker driver based on user configuration I0221 09:07:25.522340 462115 start.go:281] selected driver: docker I0221 09:07:25.522345 462115 start.go:798] validating driver "docker" against I0221 09:07:25.522361 462115 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:07:25.522420 462115 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:07:25.522438 462115 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:07:25.524080 462115 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:07:25.524710 462115 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:07:25.619214 462115 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:07:25.556542364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:07:25.619324 462115 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:07:25.619470 462115 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:07:25.619492 462115 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:07:25.619507 462115 cni.go:89] network plugin configured as "kubenet", returning disabled I0221 09:07:25.619518 462115 start_flags.go:302] config: {Name:kubenet-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:07:25.622014 462115 out.go:176] * Starting control plane node kubenet-20220221084933-6550 in cluster kubenet-20220221084933-6550 I0221 09:07:25.622065 462115 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:07:25.623707 462115 out.go:176] * Pulling base image ... 
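Note: the docker system info --format "{{json .}}" runs above feed the oci.go:119 warning about missing memory-limit support. The MemoryLimit and SwapLimit booleans are visible in the info dumps in this log; the Go below is a sketch of that probe, not minikube's oci package:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows being run via cli_runner.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// Field names match the JSON dump above (MemoryLimit:true SwapLimit:true ...).
	var info struct {
		MemoryLimit bool
		SwapLimit   bool
	}
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	if !info.MemoryLimit {
		fmt.Println("! Your cgroup does not allow setting memory.")
	}
	if !info.SwapLimit {
		fmt.Println("! Your kernel does not support swap limit capabilities.")
	}
}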
I0221 09:07:25.623738 462115 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:07:25.623772 462115 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 I0221 09:07:25.623791 462115 cache.go:57] Caching tarball of preloaded images I0221 09:07:25.623831 462115 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:07:25.624045 462115 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:07:25.624062 462115 cache.go:60] Finished verifying existence of preloaded tar for v1.23.4 on docker I0221 09:07:25.624170 462115 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/config.json ... I0221 09:07:25.624203 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/config.json: {Name:mk436cd9a3d44441ff51e526a3022ca41e7119cc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:25.670154 462115 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:07:25.670180 462115 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:07:25.670197 462115 cache.go:208] Successfully downloaded all kic artifacts I0221 09:07:25.670229 462115 start.go:313] acquiring machines lock for kubenet-20220221084933-6550: {Name:mkef701a995f5d6461266930b6bc546896915ade Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:07:25.670358 462115 start.go:317] acquired machines lock for "kubenet-20220221084933-6550" in 111.979µs I0221 09:07:25.670381 462115 start.go:89] Provisioning new machine with config: &{Name:kubenet-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local 
ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:07:25.670463 462115 start.go:126] createHost starting for "" (driver="docker") I0221 09:07:22.956716 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:25.455908 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:22.936280 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:25.434799 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:25.672815 462115 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ... I0221 09:07:25.673042 462115 start.go:160] libmachine.API.Create for "kubenet-20220221084933-6550" (driver="docker") I0221 09:07:25.673070 462115 client.go:168] LocalClient.Create starting I0221 09:07:25.673128 462115 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:07:25.673157 462115 main.go:130] libmachine: Decoding PEM data... I0221 09:07:25.673177 462115 main.go:130] libmachine: Parsing certificate... I0221 09:07:25.673234 462115 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:07:25.673252 462115 main.go:130] libmachine: Decoding PEM data... I0221 09:07:25.673266 462115 main.go:130] libmachine: Parsing certificate... 
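Note: preload.go above finds preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in the cache and skips the download. A sketch of that existence check, with the v17 preload version, overlay2 storage driver, and amd64 arch hard-coded as read off the log; treating them and the home directory as parameters is an editorial assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache layout visible in the log:
// <home>/cache/preloaded-tarball/preloaded-images-k8s-v17-<k8s>-<runtime>-overlay2-amd64.tar.lz4
func preloadPath(home, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v17-%s-%s-overlay2-amd64.tar.lz4",
		k8sVersion, runtime)
	return filepath.Join(home, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // hypothetical: whatever .minikube dir is in use
	p := preloadPath(home, "v1.23.4", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload:", p)
	}
}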
I0221 09:07:25.673584 462115 cli_runner.go:133] Run: docker network inspect kubenet-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:07:25.706536 462115 cli_runner.go:180] docker network inspect kubenet-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:07:25.706603 462115 network_create.go:254] running [docker network inspect kubenet-20220221084933-6550] to gather additional debugging logs... I0221 09:07:25.706621 462115 cli_runner.go:133] Run: docker network inspect kubenet-20220221084933-6550 W0221 09:07:25.739858 462115 cli_runner.go:180] docker network inspect kubenet-20220221084933-6550 returned with exit code 1 I0221 09:07:25.739894 462115 network_create.go:257] error running [docker network inspect kubenet-20220221084933-6550]: docker network inspect kubenet-20220221084933-6550: exit status 1 stdout: [] stderr: Error: No such network: kubenet-20220221084933-6550 I0221 09:07:25.739908 462115 network_create.go:259] output of [docker network inspect kubenet-20220221084933-6550]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: kubenet-20220221084933-6550 ** /stderr ** I0221 09:07:25.739962 462115 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:07:25.774491 462115 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5d96ab4d6b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:0b:cb:43}} I0221 09:07:25.775193 462115 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3436ceea5013 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ca:78:ad:42}} I0221 09:07:25.775878 462115 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-0c80bded97cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:76:f1:e1}} I0221 09:07:25.776653 462115 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} 
dirty:map[192.168.76.0:0xc0006540f8] misses:0} I0221 09:07:25.776701 462115 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0221 09:07:25.776716 462115 network_create.go:106] attempt to create docker network kubenet-20220221084933-6550 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ... I0221 09:07:25.776774 462115 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220221084933-6550 I0221 09:07:25.846685 462115 network_create.go:90] docker network kubenet-20220221084933-6550 192.168.76.0/24 created I0221 09:07:25.846730 462115 kic.go:106] calculated static IP "192.168.76.2" for the "kubenet-20220221084933-6550" container I0221 09:07:25.846789 462115 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:07:25.881096 462115 cli_runner.go:133] Run: docker volume create kubenet-20220221084933-6550 --label name.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:07:25.915063 462115 oci.go:102] Successfully created a docker volume kubenet-20220221084933-6550 I0221 09:07:25.915146 462115 cli_runner.go:133] Run: docker run --rm --name kubenet-20220221084933-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --entrypoint /usr/bin/test -v kubenet-20220221084933-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:07:26.487335 462115 oci.go:106] Successfully prepared a docker volume kubenet-20220221084933-6550 I0221 09:07:26.487375 462115 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:07:26.487392 462115 kic.go:179] Starting extracting preloaded images to volume ... 
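Note: network.go above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 as taken and reserves 192.168.76.0/24, i.e. the third octet advances in steps of 9. A sketch of that scan under the assumption that the stride generalizes; the real minikube network package also checks live interfaces rather than a static set:

package main

import "fmt"

// pickSubnet imitates the scan visible above: start at 192.168.49.0/24
// and step the third octet by 9 (49, 58, 67, 76, ...) until a candidate
// subnet is not in the taken set.
func pickSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// The three subnets the log reports as taken by other profiles.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println("using free private subnet:", pickSubnet(taken)) // 192.168.76.0/24
}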
I0221 09:07:26.487455 462115 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0221 09:07:27.456002 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:29.956149 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:27.935384 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:29.935450 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:34.973821 462115 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220221084933-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (8.486319336s) I0221 09:07:34.973862 462115 kic.go:188] duration metric: took 8.486468 seconds to extract preloaded images to volume W0221 09:07:34.973896 462115 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:07:34.973905 462115 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. 
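Note: the extraction above runs tar inside the kicbase image with the preload tarball mounted read-only, so nothing has to be installed on the host. A sketch that rebuilds the docker invocation from the Run line above; the flag set is copied from the log, while the tarball path and image tag here are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the logged command: mount the lz4 tarball
// read-only at /preloaded.tar, mount the profile volume at /extractDir,
// and use the image's own /usr/bin/tar to unpack.
func extractPreload(tarball, volume, image string) *exec.Cmd {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}

func main() {
	cmd := extractPreload(
		"/path/to/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4",
		"kubenet-20220221084933-6550",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531")
	fmt.Println(cmd.String())
}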
I0221 09:07:34.973954 462115 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:07:35.070268 462115 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220221084933-6550 --name kubenet-20220221084933-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220221084933-6550 --network kubenet-20220221084933-6550 --ip 192.168.76.2 --volume kubenet-20220221084933-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:07:32.455703 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:34.456216 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:32.435168 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:34.936085 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:35.496307 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Running}} I0221 09:07:35.534760 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:07:35.570397 462115 cli_runner.go:133] Run: docker exec kubenet-20220221084933-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:07:35.639096 462115 oci.go:281] the created container "kubenet-20220221084933-6550" has a running status. I0221 09:07:35.639132 462115 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa... I0221 09:07:35.832919 462115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:07:35.920473 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:07:35.961161 462115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:07:35.961187 462115 kic_runner.go:114] Args: [docker exec --privileged kubenet-20220221084933-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:07:36.057451 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:07:36.093451 462115 machine.go:88] provisioning docker machine ... 
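The kic container publishes 22, 2376, 8443, and friends as 127.0.0.1::<port>, so docker assigns ephemeral host ports, and the later `docker container inspect -f` calls recover them (49399 for SSH in this run). A sketch of that lookup with the same Go template the log shows, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the ephemeral host port docker assigned to a
// container port published as 127.0.0.1::<port>.
func hostPortFor(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor("kubenet-20220221084933-6550", "22")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh reachable on 127.0.0.1:" + port)
}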
I0221 09:07:36.093496 462115 ubuntu.go:169] provisioning hostname "kubenet-20220221084933-6550" I0221 09:07:36.093551 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.131078 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.131315 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.131393 462115 main.go:130] libmachine: About to run SSH command: sudo hostname kubenet-20220221084933-6550 && echo "kubenet-20220221084933-6550" | sudo tee /etc/hostname I0221 09:07:36.264249 462115 main.go:130] libmachine: SSH cmd err, output: : kubenet-20220221084933-6550 I0221 09:07:36.264345 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.298302 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.298505 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.298538 462115 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\skubenet-20220221084933-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20220221084933-6550/g' /etc/hosts; else echo '127.0.1.1 kubenet-20220221084933-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:07:36.418944 462115 main.go:130] libmachine: SSH cmd err, output: : I0221 09:07:36.418972 462115 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:07:36.419031 462115 ubuntu.go:177] setting up certificates I0221 09:07:36.419042 462115 provision.go:83] configureAuth start I0221 09:07:36.419102 462115 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220221084933-6550 I0221 09:07:36.453836 462115 provision.go:138] copyHostCerts I0221 09:07:36.453901 462115 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, 
removing ... I0221 09:07:36.453915 462115 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:07:36.454002 462115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:07:36.454118 462115 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:07:36.454134 462115 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:07:36.454166 462115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:07:36.454258 462115 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:07:36.454273 462115 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:07:36.454297 462115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:07:36.454356 462115 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.kubenet-20220221084933-6550 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20220221084933-6550] I0221 09:07:36.554325 462115 provision.go:172] copyRemoteCerts I0221 09:07:36.554377 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:07:36.554408 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.590327 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:36.678785 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:07:36.697396 462115 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes) I0221 09:07:36.716087 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:07:36.735165 462115 provision.go:86] duration metric: configureAuth took 316.110066ms I0221 09:07:36.735197 462115 ubuntu.go:193] setting minikube options for container-runtime I0221 09:07:36.735391 462115 config.go:176] Loaded profile config "kubenet-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:07:36.735436 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.771473 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.771605 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.771620 462115 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:07:36.895259 462115 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:07:36.895289 462115 ubuntu.go:71] root file system type: overlay I0221 09:07:36.895428 462115 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:07:36.895486 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:36.929080 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:36.929241 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:36.929337 462115 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:07:37.060341 462115 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:07:37.060410 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:37.094195 462115 main.go:130] libmachine: Using SSH client type: native I0221 09:07:37.094386 462115 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49399 } I0221 09:07:37.094408 462115 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:07:37.752179 462115 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-02-21 09:07:37.057067068 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0221 09:07:37.752225 462115 machine.go:91] provisioned docker machine in 1.658745148s I0221 09:07:37.752235 462115 client.go:171] LocalClient.Create took 12.079160402s I0221 09:07:37.752251 462115 start.go:168] duration metric: libmachine.API.Create for "kubenet-20220221084933-6550" took 12.079208916s I0221 09:07:37.752259 462115 start.go:267] post-start starting for "kubenet-20220221084933-6550" (driver="docker") I0221 09:07:37.752273 462115 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:07:37.752330 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:07:37.752382 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:37.787058 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:37.878859 462115 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:07:37.881740 462115 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:07:37.881781 462115 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:07:37.881789 462115 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:07:37.881794 462115 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:07:37.881802 462115 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... 
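The diff above exists because the new unit is installed only when it differs from the live one, and the paired ExecStart= lines are what let a replacement command be set without systemd rejecting the unit for having two ExecStart= settings. A Go sketch of that write-diff-replace dance, assuming root and a trimmed-down unit body; the full template is the one echoed earlier in this entry:

package main

import (
	"os"
	"os/exec"
)

// A trimmed docker.service body; the paired ExecStart= lines are the part
// that matters: the empty one clears the command inherited from the base
// configuration so the second one is the only ExecStart systemd sees.
const unit = `[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
`

func main() {
	const live, next = "/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"
	if err := os.WriteFile(next, []byte(unit), 0644); err != nil {
		panic(err)
	}
	// diff exits non-zero when the files differ; only then install and
	// restart, mirroring `diff -u ... || { mv ...; systemctl ...; }` above.
	if exec.Command("diff", "-u", live, next).Run() != nil {
		exec.Command("mv", next, live).Run()
		exec.Command("systemctl", "daemon-reload").Run()
		exec.Command("systemctl", "restart", "docker").Run()
	}
}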
I0221 09:07:37.881849 462115 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:07:37.881912 462115 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:07:37.881993 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:07:37.889062 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:07:37.908245 462115 start.go:270] post-start completed in 155.964278ms I0221 09:07:37.908729 462115 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220221084933-6550 I0221 09:07:37.943174 462115 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/config.json ... I0221 09:07:37.943457 462115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:07:37.943523 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:37.978137 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:38.063776 462115 start.go:129] duration metric: createHost completed in 12.393300397s I0221 09:07:38.063805 462115 start.go:80] releasing machines lock for "kubenet-20220221084933-6550", held for 12.393436394s I0221 09:07:38.063890 462115 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220221084933-6550 I0221 09:07:38.097050 462115 ssh_runner.go:195] Run: systemctl --version I0221 09:07:38.097080 462115 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:07:38.097111 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:38.097154 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:07:38.135816 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:38.136347 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:07:38.363372 462115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:07:38.373406 462115 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:07:38.382853 462115 
cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:07:38.382907 462115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:07:38.392391 462115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:07:38.405004 462115 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:07:38.482371 462115 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:07:38.560931 462115 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:07:38.571438 462115 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:07:38.652075 462115 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:07:38.661857 462115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:07:38.700829 462115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:07:38.745158 462115 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ... I0221 09:07:38.745227 462115 cli_runner.go:133] Run: docker network inspect kubenet-20220221084933-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:07:38.778053 462115 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts I0221 09:07:38.781345 462115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:07:38.792680 462115 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:07:38.792752 462115 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker I0221 09:07:38.792829 462115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:07:38.825837 462115 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:07:38.825858 462115 docker.go:537] Images already preloaded, skipping extraction I0221 09:07:38.825905 462115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:07:38.858580 462115 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.4 k8s.gcr.io/kube-proxy:v1.23.4 k8s.gcr.io/kube-controller-manager:v1.23.4 k8s.gcr.io/kube-scheduler:v1.23.4 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0221 09:07:38.858604 462115 cache_images.go:84] Images are preloaded, skipping loading I0221 09:07:38.858644 462115 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:07:38.945322 462115 cni.go:89] network plugin configured as "kubenet", returning disabled I0221 09:07:38.945346 
462115 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:07:38.945363 462115 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20220221084933-6550 NodeName:kubenet-20220221084933-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:07:38.945517 462115 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.76.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "kubenet-20220221084933-6550" kubeletExtraArgs: node-ip: 192.168.76.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.76.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.4 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:07:38.945595 462115 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker 
--hostname-override=kubenet-20220221084933-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16 [Install] config: {KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:07:38.945645 462115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4 I0221 09:07:38.953183 462115 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:07:38.953251 462115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:07:38.960848 462115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes) I0221 09:07:38.974225 462115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:07:38.987445 462115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes) I0221 09:07:39.000608 462115 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts I0221 09:07:39.003705 462115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:07:39.013379 462115 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550 for IP: 192.168.76.2 I0221 09:07:39.013475 462115 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:07:39.013519 462115 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:07:39.013564 462115 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.key I0221 09:07:39.013579 462115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt with IP's: [] I0221 09:07:39.346317 462115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt ... 
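A few lines up, /etc/hosts gains an idempotent control-plane.minikube.internal entry via the grep -v / echo / cp pipeline. A Go sketch of the same rewrite, parameterized on the file path so it can be tried against a scratch copy rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in the host name, then
// appends the desired mapping, matching the effect of the shell pipeline
// in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) && !strings.HasSuffix(line, " "+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	os.WriteFile("hosts.scratch", []byte("127.0.0.1\tlocalhost\n"), 0644)
	fmt.Println(ensureHostsEntry("hosts.scratch", "192.168.76.2", "control-plane.minikube.internal"))
}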
I0221 09:07:39.346346 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: {Name:mkb4325f5289a5f6ad4c171aa035b58192e1b4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.346548 462115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.key ... I0221 09:07:39.346562 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.key: {Name:mkac2cbd9f4db250a8ffc020a7da89dce1a50dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.346647 462115 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25 I0221 09:07:39.346664 462115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:07:39.436480 462115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25 ... I0221 09:07:39.436514 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25: {Name:mkdad2ce4bce31598ddfebae3d7e9100b4287fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.436706 462115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25 ... 
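The apiserver certificate above is issued against the minikube CA with IP SANs [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]. A crypto/x509 sketch of producing such a cert; for brevity it self-signs instead of loading ca.pem and ca-key.pem, and the subject fields are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// The CA key and cert would normally be loaded from ca-key.pem/ca.pem;
	// self-signing here is a simplification of this sketch.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in this profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs the log reports for apiserver.crt:
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}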
I0221 09:07:39.436721 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25: {Name:mkbccf5fe005f13ded3079d17293c9590de20164 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.436793 462115 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt I0221 09:07:39.436851 462115 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key I0221 09:07:39.436893 462115 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key I0221 09:07:39.436902 462115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt with IP's: [] I0221 09:07:39.558306 462115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt ... I0221 09:07:39.558337 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt: {Name:mk451b4345ee41aef79f4374e4a11d13e02c5188 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.558521 462115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key ... 
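Each of these cert and key writes goes through lock.go with Delay:500ms and Timeout:1m0s. A sketch of that acquire-with-retry shape using a plain O_EXCL lockfile; the log does not show the actual lock implementation, so the lockfile mechanism here is an assumption:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// writeWithLock retries lock acquisition every delay, gives up after
// timeout, and only writes the file while holding the lock.
func writeWithLock(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			return os.WriteFile(path, data, 0600)
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	err := writeWithLock("client.key", []byte("-----BEGIN ...-----\n"),
		500*time.Millisecond, time.Minute)
	fmt.Println("err:", err)
}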
I0221 09:07:39.558536 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key: {Name:mkc5644b090fa5bea7b910426f0cabeec97f042e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:07:39.558703 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:07:39.558744 462115 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:07:39.558757 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:07:39.558779 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:07:39.558802 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:07:39.558824 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:07:39.558865 462115 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:07:39.559723 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:07:39.580056 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:07:39.598243 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 
bytes) I0221 09:07:39.616413 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0221 09:07:39.634932 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:07:39.652842 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:07:39.671073 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:07:39.688626 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:07:39.706116 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:07:39.724730 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:07:39.743321 462115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:07:39.761318 462115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:07:39.774674 462115 ssh_runner.go:195] Run: openssl version I0221 09:07:39.779702 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:07:39.787119 462115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:07:39.790153 462115 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:07:39.790202 462115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:07:39.794986 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:07:39.802359 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:07:39.810020 462115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:07:39.813156 462115 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:07:39.813199 462115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:07:39.818127 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs 
/etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:07:39.825636 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:07:39.833153 462115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:07:39.836239 462115 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:07:39.836288 462115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:07:39.841274 462115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:07:39.848658 462115 kubeadm.go:391] StartCluster: {Name:kubenet-20220221084933-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:kubenet-20220221084933-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:07:39.848790 462115 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:07:39.880196 462115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:07:39.887505 462115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:07:39.894616 462115 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:07:39.894671 462115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf 
/etc/kubernetes/scheduler.conf I0221 09:07:39.901631 462115 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:07:39.901669 462115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:07:36.955571 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:38.956296 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:37.435269 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:39.934868 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:41.935305 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:40.422323 462115 out.go:203] - Generating certificates and keys ... I0221 09:07:43.156638 462115 out.go:203] - Booting up control plane ... I0221 09:07:41.455426 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:43.456495 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:45.955475 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:44.435651 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:46.934971 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:47.955535 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:49.956466 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:48.935532 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:50.935805 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:50.701742 462115 out.go:203] - Configuring RBAC rules ... 
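The interleaved pod_ready.go lines are the two other test profiles (PIDs 450843 and 442801) polling the Ready condition of their coredns pods every couple of seconds. minikube checks that condition through client-go; the sketch below gets the same signal by shelling out to kubectl with a JSONPath filter, which is a simplification, and the pod name is just the one from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is True, the same
// signal pod_ready.go logs as has status "Ready":"False" above.
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 10; i++ {
		ready, err := podReady("kube-system", "coredns-64897985d-7jshp")
		fmt.Printf("ready=%v err=%v\n", ready, err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}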
I0221 09:07:51.116058 462115 cni.go:89] network plugin configured as "kubenet", returning disabled I0221 09:07:51.116111 462115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:07:51.116187 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.116187 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=kubenet-20220221084933-6550 minikube.k8s.io/updated_at=2022_02_21T09_07_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.609926 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.609927 462115 ops.go:34] apiserver oom_adj: -16 I0221 09:07:52.169159 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:52.668865 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:53.169400 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:53.668642 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:54.168995 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:54.668863 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:55.169566 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:51.956502 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:54.455635 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:53.435434 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:55.934804 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:55.669358 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:56.168935 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:56.668724 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:57.168562 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:57.669164 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:58.169111 462115 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:58.669058 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:59.169144 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:59.669192 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:00.168848 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:07:56.456157 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:58.955830 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:00.956100 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:07:57.935055 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:07:59.935752 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:00.669438 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:01.168952 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:01.669422 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:02.168600 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:02.668524 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:03.169224 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:03.669191 462115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:08:03.726860 462115 kubeadm.go:1020] duration metric: took 12.610725464s to wait for elevateKubeSystemPrivileges. 
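The ~500ms cadence of the repeated "kubectl get sa default" calls above is the harness polling until the default service account exists, the final step of elevateKubeSystemPrivileges. A minimal hand-run sketch of the same loop, assuming the in-node binary and kubeconfig paths shown in the log and a shell inside the minikube node:

    # Retry roughly every 500ms, matching the timestamps above, until the
    # default service account is visible to the apiserver.
    until sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done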
I0221 09:08:03.726892 462115 kubeadm.go:393] StartCluster complete in 23.878240603s I0221 09:08:03.726910 462115 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:08:03.727040 462115 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:08:03.729324 462115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:08:04.249961 462115 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20220221084933-6550" rescaled to 1 I0221 09:08:04.250026 462115 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:08:04.251879 462115 out.go:176] * Verifying Kubernetes components... I0221 09:08:04.250133 462115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:08:04.250165 462115 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:08:04.250295 462115 config.go:176] Loaded profile config "kubenet-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:08:04.252003 462115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:08:04.252112 462115 addons.go:65] Setting storage-provisioner=true in profile "kubenet-20220221084933-6550" I0221 09:08:04.252218 462115 addons.go:153] Setting addon storage-provisioner=true in "kubenet-20220221084933-6550" W0221 09:08:04.252235 462115 addons.go:165] addon storage-provisioner should already be in state true I0221 09:08:04.252123 462115 addons.go:65] Setting default-storageclass=true in profile "kubenet-20220221084933-6550" I0221 09:08:04.252284 462115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20220221084933-6550" I0221 09:08:04.253268 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:08:04.255154 462115 host.go:66] Checking if "kubenet-20220221084933-6550" exists ... I0221 09:08:04.257267 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:08:04.299560 462115 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:08:04.299670 462115 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:08:04.299690 462115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:08:04.299737 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:08:04.305379 462115 addons.go:153] Setting addon default-storageclass=true in "kubenet-20220221084933-6550" W0221 09:08:04.305411 462115 addons.go:165] addon default-storageclass should already be in state true I0221 09:08:04.305440 462115 host.go:66] Checking if "kubenet-20220221084933-6550" exists ... 
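Both addon paths (storage-provisioner and default-storageclass) gate on the state of the node container, shelling out to docker container inspect with a Go template. The same checks can be run by hand; the commands below are taken from the log, with the quoting adapted so the inner quotes around 22/tcp survive for the Go template:

    # State of the minikube node container (expect "running"):
    docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}}
    # Host port published for the node's SSH port 22, used to build the ssh client:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubenet-20220221084933-6550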
I0221 09:08:04.305926 462115 cli_runner.go:133] Run: docker container inspect kubenet-20220221084933-6550 --format={{.State.Status}} I0221 09:08:04.346696 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:08:04.356870 462115 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:08:04.356904 462115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:08:04.356958 462115 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220221084933-6550 I0221 09:08:04.390212 462115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49399 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/kubenet-20220221084933-6550/id_rsa Username:docker} I0221 09:08:04.434817 462115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:08:04.437465 462115 node_ready.go:35] waiting up to 5m0s for node "kubenet-20220221084933-6550" to be "Ready" ... I0221 09:08:04.443494 462115 node_ready.go:49] node "kubenet-20220221084933-6550" has status "Ready":"True" I0221 09:08:04.443524 462115 node_ready.go:38] duration metric: took 6.03247ms waiting for node "kubenet-20220221084933-6550" to be "Ready" ... I0221 09:08:04.443536 462115 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:08:04.512531 462115 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-cx6k8" in "kube-system" namespace to be "Ready" ... I0221 09:08:04.531502 462115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:08:04.624839 462115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:08:02.957327 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:05.456300 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:05.925268 462115 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.490406551s) I0221 09:08:05.925309 462115 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS I0221 09:08:06.036250 462115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.41137268s) I0221 09:08:06.036367 462115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.504836543s) I0221 09:08:02.434827 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:04.436572 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:06.935506 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:06.038177 462115 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:08:06.038202 462115 addons.go:417] enableAddons completed in 1.78804558s I0221 09:08:06.534275 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:09.032356 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:07.955229 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:09.956217 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:09.434727 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:11.435464 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:11.531561 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:13.532264 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:12.455629 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:14.455947 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:13.437319 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:15.934478 442801 pod_ready.go:102] pod "coredns-64897985d-mr75l" in "kube-system" namespace has status "Ready":"False" I0221 09:08:15.939175 442801 pod_ready.go:81] duration metric: took 4m0.019101325s waiting for pod "coredns-64897985d-mr75l" in "kube-system" namespace to be "Ready" ... E0221 09:08:15.939200 442801 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 09:08:15.939209 442801 pod_ready.go:78] waiting up to 5m0s for pod "etcd-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... 
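The sed pipeline that just completed (about 1.5s in the entry above) rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway. A sketch for verifying the result by hand, under the same in-node path assumptions; the stanza in the trailing comment is exactly what the sed /i expression inserts ahead of the forward directive:

    # Dump the patched Corefile from the live ConfigMap:
    sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml
    # Expected inserted stanza (gateway IP from this profile's log):
    #        hosts {
    #           192.168.76.1 host.minikube.internal
    #           fallthrough
    #        }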
I0221 09:08:15.943146 442801 pod_ready.go:92] pod "etcd-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:15.943167 442801 pod_ready.go:81] duration metric: took 3.9518ms waiting for pod "etcd-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.943176 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.946802 442801 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:15.946818 442801 pod_ready.go:81] duration metric: took 3.636488ms waiting for pod "kube-apiserver-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.946827 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.950426 442801 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:15.950443 442801 pod_ready.go:81] duration metric: took 3.610411ms waiting for pod "kube-controller-manager-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:15.950451 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-z67wt" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.332833 442801 pod_ready.go:92] pod "kube-proxy-z67wt" in "kube-system" namespace has status "Ready":"True" I0221 09:08:16.332859 442801 pod_ready.go:81] duration metric: took 382.401522ms waiting for pod "kube-proxy-z67wt" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.332869 442801 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.733188 442801 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:16.733214 442801 pod_ready.go:81] duration metric: took 400.337647ms waiting for pod "kube-scheduler-enable-default-cni-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:16.733225 442801 pod_ready.go:38] duration metric: took 4m1.872423421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:08:16.733251 442801 api_server.go:51] waiting for apiserver process to appear ... 
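pod_ready iterates the label selectors listed above and blocks until each matching pod reports the Ready condition. An approximate hand-run equivalent with plain kubectl, assuming the current context already points at the profile under test:

    # Print "name: Ready-status" for every system-critical selector checked above.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system get pods -l "$sel" -o \
        jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
    done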
I0221 09:08:16.733309 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:16.768380 442801 logs.go:274] 1 containers: [22f36e8efd01] I0221 09:08:16.768445 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:16.800855 442801 logs.go:274] 1 containers: [2d52356b4d44] I0221 09:08:16.800921 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:16.837626 442801 logs.go:274] 1 containers: [3eab59e55df1] I0221 09:08:16.837689 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:16.873309 442801 logs.go:274] 1 containers: [6e0b11913ead] I0221 09:08:16.873374 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:16.906474 442801 logs.go:274] 1 containers: [b198c3fa1558] I0221 09:08:16.906554 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:16.939879 442801 logs.go:274] 0 containers: [] W0221 09:08:16.939899 442801 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:16.939937 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:16.973485 442801 logs.go:274] 1 containers: [987fc4d25f59] I0221 09:08:16.973566 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:17.006616 442801 logs.go:274] 1 containers: [9da67fbcae63] I0221 09:08:17.006648 442801 logs.go:123] Gathering logs for kube-controller-manager [9da67fbcae63] ... I0221 09:08:17.006657 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da67fbcae63" I0221 09:08:17.053540 442801 logs.go:123] Gathering logs for Docker ... I0221 09:08:17.053572 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:15.532346 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:17.532788 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:20.032086 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:16.456884 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:18.955296 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:20.957135 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:17.071249 442801 logs.go:123] Gathering logs for kubelet ... I0221 09:08:17.071285 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:17.132024 442801 logs.go:123] Gathering logs for describe nodes ... I0221 09:08:17.132066 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:17.209722 442801 logs.go:123] Gathering logs for kube-apiserver [22f36e8efd01] ... I0221 09:08:17.209756 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f36e8efd01" I0221 09:08:17.251249 442801 logs.go:123] Gathering logs for kube-scheduler [6e0b11913ead] ... 
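Each log-gathering pass above follows the same two-step pattern: resolve the container ID from the kubelet's k8s_<component> naming prefix, then tail that container's logs. By hand, for one component, both commands appear verbatim in the log; the command substitution is the only addition:

    # Locate the apiserver container and tail its last 400 log lines:
    cid=$(docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}' | head -n1)
    docker logs --tail 400 "$cid"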
I0221 09:08:17.251280 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0b11913ead" I0221 09:08:17.292981 442801 logs.go:123] Gathering logs for storage-provisioner [987fc4d25f59] ... I0221 09:08:17.293018 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987fc4d25f59" I0221 09:08:17.329067 442801 logs.go:123] Gathering logs for dmesg ... I0221 09:08:17.329100 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:17.358556 442801 logs.go:123] Gathering logs for etcd [2d52356b4d44] ... I0221 09:08:17.358591 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d52356b4d44" I0221 09:08:17.426854 442801 logs.go:123] Gathering logs for coredns [3eab59e55df1] ... I0221 09:08:17.426899 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3eab59e55df1" I0221 09:08:17.464666 442801 logs.go:123] Gathering logs for kube-proxy [b198c3fa1558] ... I0221 09:08:17.464693 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b198c3fa1558" I0221 09:08:17.501900 442801 logs.go:123] Gathering logs for container status ... I0221 09:08:17.501927 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:20.035109 442801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:08:20.056012 442801 api_server.go:71] duration metric: took 4m5.308939265s to wait for apiserver process to appear ... I0221 09:08:20.056038 442801 api_server.go:87] waiting for apiserver healthz status ... I0221 09:08:20.056088 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:20.088468 442801 logs.go:274] 1 containers: [22f36e8efd01] I0221 09:08:20.088542 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:20.122210 442801 logs.go:274] 1 containers: [2d52356b4d44] I0221 09:08:20.122296 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:20.158381 442801 logs.go:274] 1 containers: [3eab59e55df1] I0221 09:08:20.158463 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:20.196267 442801 logs.go:274] 1 containers: [6e0b11913ead] I0221 09:08:20.196344 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:20.233791 442801 logs.go:274] 1 containers: [b198c3fa1558] I0221 09:08:20.233865 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:20.284366 442801 logs.go:274] 0 containers: [] W0221 09:08:20.284395 442801 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:20.284446 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:20.317999 442801 logs.go:274] 1 containers: [987fc4d25f59] I0221 09:08:20.318069 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:20.350848 442801 logs.go:274] 1 containers: [9da67fbcae63] I0221 09:08:20.350881 442801 logs.go:123] Gathering logs for kube-controller-manager [9da67fbcae63] ... I0221 09:08:20.350897 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da67fbcae63" I0221 09:08:20.397231 442801 logs.go:123] Gathering logs for describe nodes ... 
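The "waiting for apiserver process to appear" step above reduces to a single pgrep probe: -x for an exact match, -f to match against the full command line, -n for the newest matching process. Reproduced by hand, with the pattern quoted for the shell:

    # Prints a PID (and exits 0) once kube-apiserver is running in the node:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'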
I0221 09:08:20.397265 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:20.478264 442801 logs.go:123] Gathering logs for kube-apiserver [22f36e8efd01] ... I0221 09:08:20.478295 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f36e8efd01" I0221 09:08:20.519692 442801 logs.go:123] Gathering logs for kube-scheduler [6e0b11913ead] ... I0221 09:08:20.519731 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0b11913ead" I0221 09:08:20.562951 442801 logs.go:123] Gathering logs for kube-proxy [b198c3fa1558] ... I0221 09:08:20.562980 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b198c3fa1558" I0221 09:08:20.598320 442801 logs.go:123] Gathering logs for storage-provisioner [987fc4d25f59] ... I0221 09:08:20.598355 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987fc4d25f59" I0221 09:08:20.634456 442801 logs.go:123] Gathering logs for container status ... I0221 09:08:20.634484 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:20.665048 442801 logs.go:123] Gathering logs for kubelet ... I0221 09:08:20.665075 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:20.725872 442801 logs.go:123] Gathering logs for dmesg ... I0221 09:08:20.725912 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:20.756158 442801 logs.go:123] Gathering logs for etcd [2d52356b4d44] ... I0221 09:08:20.756192 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d52356b4d44" I0221 09:08:20.826595 442801 logs.go:123] Gathering logs for coredns [3eab59e55df1] ... I0221 09:08:20.826630 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3eab59e55df1" I0221 09:08:20.863311 442801 logs.go:123] Gathering logs for Docker ... I0221 09:08:20.863341 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:22.032139 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:24.033164 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:23.456009 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:25.456137 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:23.380697 442801 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ... I0221 09:08:23.386477 442801 api_server.go:266] https://192.168.58.2:8443/healthz returned 200: ok I0221 09:08:23.387402 442801 api_server.go:140] control plane version: v1.23.4 I0221 09:08:23.387422 442801 api_server.go:130] duration metric: took 3.331378972s to wait for apiserver health ... I0221 09:08:23.387430 442801 system_pods.go:43] waiting for kube-system pods to appear ... 
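The healthz probe above polls the apiserver endpoint until it answers 200, which it does here, returning "ok". A curl equivalent, sketched: -k skips verification of the cluster-internal serving certificate, and /healthz should be readable anonymously under the default system:public-info-viewer binding:

    # Expect HTTP 200 with body "ok" (endpoint taken from the log above):
    curl -k https://192.168.58.2:8443/healthz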
I0221 09:08:23.387474 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:23.421047 442801 logs.go:274] 1 containers: [22f36e8efd01] I0221 09:08:23.421115 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:23.454732 442801 logs.go:274] 1 containers: [2d52356b4d44] I0221 09:08:23.454820 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:23.487796 442801 logs.go:274] 1 containers: [3eab59e55df1] I0221 09:08:23.487856 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:23.521159 442801 logs.go:274] 1 containers: [6e0b11913ead] I0221 09:08:23.521229 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:23.554304 442801 logs.go:274] 1 containers: [b198c3fa1558] I0221 09:08:23.554365 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:23.586487 442801 logs.go:274] 0 containers: [] W0221 09:08:23.586516 442801 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:23.586570 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:23.619535 442801 logs.go:274] 1 containers: [987fc4d25f59] I0221 09:08:23.619609 442801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:23.653221 442801 logs.go:274] 1 containers: [9da67fbcae63] I0221 09:08:23.653257 442801 logs.go:123] Gathering logs for Docker ... I0221 09:08:23.653267 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:23.672016 442801 logs.go:123] Gathering logs for dmesg ... I0221 09:08:23.672053 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:23.701424 442801 logs.go:123] Gathering logs for kube-apiserver [22f36e8efd01] ... I0221 09:08:23.701468 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f36e8efd01" I0221 09:08:23.743991 442801 logs.go:123] Gathering logs for coredns [3eab59e55df1] ... I0221 09:08:23.744028 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3eab59e55df1" I0221 09:08:23.780569 442801 logs.go:123] Gathering logs for kube-controller-manager [9da67fbcae63] ... I0221 09:08:23.780619 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da67fbcae63" I0221 09:08:23.827784 442801 logs.go:123] Gathering logs for kube-proxy [b198c3fa1558] ... I0221 09:08:23.827817 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b198c3fa1558" I0221 09:08:23.864039 442801 logs.go:123] Gathering logs for storage-provisioner [987fc4d25f59] ... I0221 09:08:23.864066 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987fc4d25f59" I0221 09:08:23.898581 442801 logs.go:123] Gathering logs for container status ... I0221 09:08:23.898611 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:23.929719 442801 logs.go:123] Gathering logs for kubelet ... I0221 09:08:23.929752 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:23.993562 442801 logs.go:123] Gathering logs for describe nodes ... 
I0221 09:08:23.993601 442801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:24.072192 442801 logs.go:123] Gathering logs for etcd [2d52356b4d44] ... I0221 09:08:24.072221 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d52356b4d44" I0221 09:08:24.140746 442801 logs.go:123] Gathering logs for kube-scheduler [6e0b11913ead] ... I0221 09:08:24.140783 442801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0b11913ead" I0221 09:08:26.688110 442801 system_pods.go:59] 7 kube-system pods found I0221 09:08:26.688145 442801 system_pods.go:61] "coredns-64897985d-mr75l" [0cfd24b7-95f1-482c-bcb1-3beb08eebcac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:08:26.688151 442801 system_pods.go:61] "etcd-enable-default-cni-20220221084933-6550" [dfe7d3c6-2aee-415d-a22b-2f35c061c3c6] Running I0221 09:08:26.688156 442801 system_pods.go:61] "kube-apiserver-enable-default-cni-20220221084933-6550" [d2a36bb5-d5a4-48b0-b8ea-12bbe483aa51] Running I0221 09:08:26.688160 442801 system_pods.go:61] "kube-controller-manager-enable-default-cni-20220221084933-6550" [f17938b7-182f-4b24-b475-c222cdd5babc] Running I0221 09:08:26.688165 442801 system_pods.go:61] "kube-proxy-z67wt" [5988151c-b7ae-4c8d-9095-09aeb868ab3c] Running I0221 09:08:26.688173 442801 system_pods.go:61] "kube-scheduler-enable-default-cni-20220221084933-6550" [0d16e06a-b0a2-4266-bca7-1f5d7e5fc9a7] Running I0221 09:08:26.688180 442801 system_pods.go:61] "storage-provisioner" [8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:08:26.688185 442801 system_pods.go:74] duration metric: took 3.300751331s to wait for pod list to return data ... I0221 09:08:26.688204 442801 default_sa.go:34] waiting for default service account to be created ... I0221 09:08:26.690563 442801 default_sa.go:45] found service account: "default" I0221 09:08:26.690583 442801 default_sa.go:55] duration metric: took 2.374957ms for default service account to be created ... I0221 09:08:26.690589 442801 system_pods.go:116] waiting for k8s-apps to be running ... 
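The pod inventory above shows coredns-64897985d-mr75l Running but with its only container not Ready, the same condition the earlier four-minute wait timed out on. A first debugging step, sketched with plain kubectl against this profile's context:

    # Container state, restart count, and recent events for the stuck pod:
    kubectl -n kube-system describe pod coredns-64897985d-mr75l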
I0221 09:08:26.694728 442801 system_pods.go:86] 7 kube-system pods found I0221 09:08:26.694761 442801 system_pods.go:89] "coredns-64897985d-mr75l" [0cfd24b7-95f1-482c-bcb1-3beb08eebcac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:08:26.694771 442801 system_pods.go:89] "etcd-enable-default-cni-20220221084933-6550" [dfe7d3c6-2aee-415d-a22b-2f35c061c3c6] Running I0221 09:08:26.694781 442801 system_pods.go:89] "kube-apiserver-enable-default-cni-20220221084933-6550" [d2a36bb5-d5a4-48b0-b8ea-12bbe483aa51] Running I0221 09:08:26.694788 442801 system_pods.go:89] "kube-controller-manager-enable-default-cni-20220221084933-6550" [f17938b7-182f-4b24-b475-c222cdd5babc] Running I0221 09:08:26.694798 442801 system_pods.go:89] "kube-proxy-z67wt" [5988151c-b7ae-4c8d-9095-09aeb868ab3c] Running I0221 09:08:26.694806 442801 system_pods.go:89] "kube-scheduler-enable-default-cni-20220221084933-6550" [0d16e06a-b0a2-4266-bca7-1f5d7e5fc9a7] Running I0221 09:08:26.694821 442801 system_pods.go:89] "storage-provisioner" [8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:08:26.694831 442801 system_pods.go:126] duration metric: took 4.238216ms to wait for k8s-apps to be running ... I0221 09:08:26.694840 442801 system_svc.go:44] waiting for kubelet service to be running .... I0221 09:08:26.694893 442801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:08:26.705033 442801 system_svc.go:56] duration metric: took 10.186494ms WaitForService to wait for kubelet. I0221 09:08:26.705054 442801 kubeadm.go:548] duration metric: took 4m11.957986174s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 09:08:26.705090 442801 node_conditions.go:102] verifying NodePressure condition ... I0221 09:08:26.708537 442801 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:08:26.708562 442801 node_conditions.go:123] node cpu capacity is 8 I0221 09:08:26.708574 442801 node_conditions.go:105] duration metric: took 3.473833ms to run NodePressure ... I0221 09:08:26.708582 442801 start.go:213] waiting for startup goroutines ... I0221 09:08:26.743675 442801 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0) I0221 09:08:26.746370 442801 out.go:176] * Done! kubectl is now configured to use "enable-default-cni-20220221084933-6550" cluster and "default" namespace by default I0221 09:08:26.532003 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:28.532271 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:27.955843 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:29.957347 450843 pod_ready.go:102] pod "coredns-64897985d-7jshp" in "kube-system" namespace has status "Ready":"False" I0221 09:08:30.461783 450843 pod_ready.go:81] duration metric: took 4m0.019873237s waiting for pod "coredns-64897985d-7jshp" in "kube-system" namespace to be "Ready" ... 
E0221 09:08:30.461805 450843 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0221 09:08:30.461815 450843 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-tl8l4" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.464015 450843 pod_ready.go:97] error getting pod "coredns-64897985d-tl8l4" in "kube-system" namespace (skipping!): pods "coredns-64897985d-tl8l4" not found I0221 09:08:30.464043 450843 pod_ready.go:81] duration metric: took 2.221437ms waiting for pod "coredns-64897985d-tl8l4" in "kube-system" namespace to be "Ready" ... E0221 09:08:30.464052 450843 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-tl8l4" in "kube-system" namespace (skipping!): pods "coredns-64897985d-tl8l4" not found I0221 09:08:30.464060 450843 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.468684 450843 pod_ready.go:92] pod "etcd-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:30.468703 450843 pod_ready.go:81] duration metric: took 4.62867ms waiting for pod "etcd-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.468712 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.473506 450843 pod_ready.go:92] pod "kube-apiserver-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:30.473534 450843 pod_ready.go:81] duration metric: took 4.815616ms waiting for pod "kube-apiserver-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.473547 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.655362 450843 pod_ready.go:92] pod "kube-controller-manager-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:30.655389 450843 pod_ready.go:81] duration metric: took 181.833546ms waiting for pod "kube-controller-manager-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.655404 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-pzvfl" in "kube-system" namespace to be "Ready" ... I0221 09:08:30.533538 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:33.032221 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:35.032654 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:31.055302 450843 pod_ready.go:92] pod "kube-proxy-pzvfl" in "kube-system" namespace has status "Ready":"True" I0221 09:08:31.055329 450843 pod_ready.go:81] duration metric: took 399.916434ms waiting for pod "kube-proxy-pzvfl" in "kube-system" namespace to be "Ready" ... I0221 09:08:31.055341 450843 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... 
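These per-pod waits can also be reproduced with kubectl's built-in condition waiter instead of a manual poll; a sketch against the same bridge-profile pod, with the timeout matching the harness's 5m0s:

    # Block until the scheduler pod reports Ready, or fail after 5 minutes:
    kubectl -n kube-system wait --for=condition=Ready \
        pod/kube-scheduler-bridge-20220221084933-6550 --timeout=5m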
I0221 09:08:31.454924 450843 pod_ready.go:92] pod "kube-scheduler-bridge-20220221084933-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:08:31.454951 450843 pod_ready.go:81] duration metric: took 399.602576ms waiting for pod "kube-scheduler-bridge-20220221084933-6550" in "kube-system" namespace to be "Ready" ... I0221 09:08:31.454961 450843 pod_ready.go:38] duration metric: took 4m1.022736723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:08:31.454988 450843 api_server.go:51] waiting for apiserver process to appear ... I0221 09:08:31.455055 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:31.491695 450843 logs.go:274] 1 containers: [6a850a90d786] I0221 09:08:31.491756 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:31.531102 450843 logs.go:274] 1 containers: [5eb857f7738e] I0221 09:08:31.531209 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:31.571986 450843 logs.go:274] 1 containers: [8eb32092067f] I0221 09:08:31.572064 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:31.603731 450843 logs.go:274] 1 containers: [6e69145b30ad] I0221 09:08:31.603809 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:31.642830 450843 logs.go:274] 1 containers: [cd31aa9c0c74] I0221 09:08:31.642911 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:31.680618 450843 logs.go:274] 0 containers: [] W0221 09:08:31.680640 450843 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:31.680695 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:31.716281 450843 logs.go:274] 2 containers: [dedfecc4ece7 40d03e6cd1a3] I0221 09:08:31.716379 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:31.757092 450843 logs.go:274] 1 containers: [d092f7171bc6] I0221 09:08:31.757132 450843 logs.go:123] Gathering logs for kubelet ... I0221 09:08:31.757143 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:31.825700 450843 logs.go:123] Gathering logs for dmesg ... I0221 09:08:31.825746 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:31.858480 450843 logs.go:123] Gathering logs for coredns [8eb32092067f] ... I0221 09:08:31.858519 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eb32092067f" I0221 09:08:31.896488 450843 logs.go:123] Gathering logs for storage-provisioner [40d03e6cd1a3] ... I0221 09:08:31.896515 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40d03e6cd1a3" I0221 09:08:31.936833 450843 logs.go:123] Gathering logs for kube-controller-manager [d092f7171bc6] ... I0221 09:08:31.936864 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d092f7171bc6" I0221 09:08:31.986267 450843 logs.go:123] Gathering logs for Docker ... 
I0221 09:08:31.986300 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:32.003785 450843 logs.go:123] Gathering logs for container status ... I0221 09:08:32.003828 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:32.034717 450843 logs.go:123] Gathering logs for describe nodes ... I0221 09:08:32.034746 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:32.113035 450843 logs.go:123] Gathering logs for kube-apiserver [6a850a90d786] ... I0221 09:08:32.113066 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a850a90d786" I0221 09:08:32.154757 450843 logs.go:123] Gathering logs for etcd [5eb857f7738e] ... I0221 09:08:32.154788 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb857f7738e" I0221 09:08:32.195113 450843 logs.go:123] Gathering logs for kube-scheduler [6e69145b30ad] ... I0221 09:08:32.195148 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e69145b30ad" I0221 09:08:32.238193 450843 logs.go:123] Gathering logs for kube-proxy [cd31aa9c0c74] ... I0221 09:08:32.238227 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd31aa9c0c74" I0221 09:08:32.276303 450843 logs.go:123] Gathering logs for storage-provisioner [dedfecc4ece7] ... I0221 09:08:32.276341 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dedfecc4ece7" I0221 09:08:34.817047 450843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:08:34.840119 450843 api_server.go:71] duration metric: took 4m4.619098866s to wait for apiserver process to appear ... I0221 09:08:34.840149 450843 api_server.go:87] waiting for apiserver healthz status ... I0221 09:08:34.840199 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:34.875740 450843 logs.go:274] 1 containers: [6a850a90d786] I0221 09:08:34.875812 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:34.910872 450843 logs.go:274] 1 containers: [5eb857f7738e] I0221 09:08:34.910947 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:34.944892 450843 logs.go:274] 1 containers: [8eb32092067f] I0221 09:08:34.944960 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:34.977163 450843 logs.go:274] 1 containers: [6e69145b30ad] I0221 09:08:34.977221 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:35.010985 450843 logs.go:274] 1 containers: [cd31aa9c0c74] I0221 09:08:35.011097 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:35.046325 450843 logs.go:274] 0 containers: [] W0221 09:08:35.046354 450843 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:35.046395 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:35.079716 450843 logs.go:274] 1 containers: [dedfecc4ece7] I0221 09:08:35.079795 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:35.113823 450843 logs.go:274] 1 containers: [d092f7171bc6] I0221 09:08:35.113862 450843 logs.go:123] Gathering logs for describe nodes ... 
I0221 09:08:35.113877 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:35.189171 450843 logs.go:123] Gathering logs for coredns [8eb32092067f] ... I0221 09:08:35.189199 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eb32092067f" I0221 09:08:35.225009 450843 logs.go:123] Gathering logs for kube-scheduler [6e69145b30ad] ... I0221 09:08:35.225039 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e69145b30ad" I0221 09:08:35.271029 450843 logs.go:123] Gathering logs for kube-proxy [cd31aa9c0c74] ... I0221 09:08:35.271066 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd31aa9c0c74" I0221 09:08:35.307725 450843 logs.go:123] Gathering logs for kube-controller-manager [d092f7171bc6] ... I0221 09:08:35.307772 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d092f7171bc6" I0221 09:08:35.355496 450843 logs.go:123] Gathering logs for kubelet ... I0221 09:08:35.355531 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:35.417146 450843 logs.go:123] Gathering logs for kube-apiserver [6a850a90d786] ... I0221 09:08:35.417244 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a850a90d786" I0221 09:08:35.459560 450843 logs.go:123] Gathering logs for etcd [5eb857f7738e] ... I0221 09:08:35.459598 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb857f7738e" I0221 09:08:35.498980 450843 logs.go:123] Gathering logs for storage-provisioner [dedfecc4ece7] ... I0221 09:08:35.499046 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dedfecc4ece7" I0221 09:08:35.536957 450843 logs.go:123] Gathering logs for Docker ... I0221 09:08:35.536986 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:35.553551 450843 logs.go:123] Gathering logs for container status ... I0221 09:08:35.553587 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:35.585466 450843 logs.go:123] Gathering logs for dmesg ... I0221 09:08:35.585502 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:37.532234 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:40.032852 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:38.116914 450843 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... I0221 09:08:38.122687 450843 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok I0221 09:08:38.123855 450843 api_server.go:140] control plane version: v1.23.4 I0221 09:08:38.123880 450843 api_server.go:130] duration metric: took 3.28372628s to wait for apiserver health ... I0221 09:08:38.123889 450843 system_pods.go:43] waiting for kube-system pods to appear ... 
I0221 09:08:38.123935 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0221 09:08:38.159427 450843 logs.go:274] 1 containers: [6a850a90d786] I0221 09:08:38.159494 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0221 09:08:38.193788 450843 logs.go:274] 1 containers: [5eb857f7738e] I0221 09:08:38.193865 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0221 09:08:38.229739 450843 logs.go:274] 1 containers: [8eb32092067f] I0221 09:08:38.229817 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0221 09:08:38.265319 450843 logs.go:274] 1 containers: [6e69145b30ad] I0221 09:08:38.265402 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0221 09:08:38.299845 450843 logs.go:274] 1 containers: [cd31aa9c0c74] I0221 09:08:38.299913 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0221 09:08:38.335291 450843 logs.go:274] 0 containers: [] W0221 09:08:38.335317 450843 logs.go:276] No container was found matching "kubernetes-dashboard" I0221 09:08:38.335371 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0221 09:08:38.371605 450843 logs.go:274] 1 containers: [dedfecc4ece7] I0221 09:08:38.371697 450843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0221 09:08:38.410349 450843 logs.go:274] 1 containers: [d092f7171bc6] I0221 09:08:38.410384 450843 logs.go:123] Gathering logs for etcd [5eb857f7738e] ... I0221 09:08:38.410398 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb857f7738e" I0221 09:08:38.455212 450843 logs.go:123] Gathering logs for kube-scheduler [6e69145b30ad] ... I0221 09:08:38.455258 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e69145b30ad" I0221 09:08:38.521301 450843 logs.go:123] Gathering logs for kube-proxy [cd31aa9c0c74] ... I0221 09:08:38.521339 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd31aa9c0c74" I0221 09:08:38.558468 450843 logs.go:123] Gathering logs for storage-provisioner [dedfecc4ece7] ... I0221 09:08:38.558494 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dedfecc4ece7" I0221 09:08:38.595044 450843 logs.go:123] Gathering logs for kube-controller-manager [d092f7171bc6] ... I0221 09:08:38.595079 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d092f7171bc6" I0221 09:08:38.643023 450843 logs.go:123] Gathering logs for container status ... I0221 09:08:38.643061 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0221 09:08:38.673174 450843 logs.go:123] Gathering logs for dmesg ... I0221 09:08:38.673205 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0221 09:08:38.705820 450843 logs.go:123] Gathering logs for describe nodes ... I0221 09:08:38.705854 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0221 09:08:38.783647 450843 logs.go:123] Gathering logs for kube-apiserver [6a850a90d786] ... I0221 09:08:38.783681 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a850a90d786" I0221 09:08:38.824580 450843 logs.go:123] Gathering logs for coredns [8eb32092067f] ... 
I0221 09:08:38.824618 450843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eb32092067f" I0221 09:08:38.861663 450843 logs.go:123] Gathering logs for Docker ... I0221 09:08:38.861694 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0221 09:08:38.878877 450843 logs.go:123] Gathering logs for kubelet ... I0221 09:08:38.878909 450843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0221 09:08:41.444924 450843 system_pods.go:59] 7 kube-system pods found I0221 09:08:41.444986 450843 system_pods.go:61] "coredns-64897985d-7jshp" [8d3d6c95-cecd-4c5c-b6a5-481f281a9c9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:08:41.445006 450843 system_pods.go:61] "etcd-bridge-20220221084933-6550" [6405e54f-2102-4390-9bac-d18668f32149] Running I0221 09:08:41.445022 450843 system_pods.go:61] "kube-apiserver-bridge-20220221084933-6550" [4ce115ae-793f-4994-a0be-928e77985675] Running I0221 09:08:41.445034 450843 system_pods.go:61] "kube-controller-manager-bridge-20220221084933-6550" [1e23af6e-a828-4974-ac87-c367b69697d6] Running I0221 09:08:41.445044 450843 system_pods.go:61] "kube-proxy-pzvfl" [1d716cc7-064a-4439-88b1-5d131874760e] Running I0221 09:08:41.445058 450843 system_pods.go:61] "kube-scheduler-bridge-20220221084933-6550" [63fb1f89-2553-4c6d-99a2-fb69ac76690f] Running I0221 09:08:41.445073 450843 system_pods.go:61] "storage-provisioner" [2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:08:41.445084 450843 system_pods.go:74] duration metric: took 3.321189782s to wait for pod list to return data ... I0221 09:08:41.445098 450843 default_sa.go:34] waiting for default service account to be created ... I0221 09:08:41.447570 450843 default_sa.go:45] found service account: "default" I0221 09:08:41.447591 450843 default_sa.go:55] duration metric: took 2.485246ms for default service account to be created ... I0221 09:08:41.447598 450843 system_pods.go:116] waiting for k8s-apps to be running ... 
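After the pod checks, the harness verifies NodePressure by reading node capacity; the node_conditions entries just below report 304695084Ki of ephemeral storage and 8 CPUs. The same fields via jsonpath, assuming the single node carries the profile name, as minikube's naming of single-node clusters implies:

    # Ephemeral-storage and CPU capacity, the fields node_conditions verifies:
    kubectl get node bridge-20220221084933-6550 \
        -o jsonpath='{.status.capacity.ephemeral-storage}{"\n"}{.status.capacity.cpu}{"\n"}'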
I0221 09:08:41.451546 450843 system_pods.go:86] 7 kube-system pods found I0221 09:08:41.451573 450843 system_pods.go:89] "coredns-64897985d-7jshp" [8d3d6c95-cecd-4c5c-b6a5-481f281a9c9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:08:41.451580 450843 system_pods.go:89] "etcd-bridge-20220221084933-6550" [6405e54f-2102-4390-9bac-d18668f32149] Running I0221 09:08:41.451585 450843 system_pods.go:89] "kube-apiserver-bridge-20220221084933-6550" [4ce115ae-793f-4994-a0be-928e77985675] Running I0221 09:08:41.451589 450843 system_pods.go:89] "kube-controller-manager-bridge-20220221084933-6550" [1e23af6e-a828-4974-ac87-c367b69697d6] Running I0221 09:08:41.451593 450843 system_pods.go:89] "kube-proxy-pzvfl" [1d716cc7-064a-4439-88b1-5d131874760e] Running I0221 09:08:41.451597 450843 system_pods.go:89] "kube-scheduler-bridge-20220221084933-6550" [63fb1f89-2553-4c6d-99a2-fb69ac76690f] Running I0221 09:08:41.451602 450843 system_pods.go:89] "storage-provisioner" [2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:08:41.451612 450843 system_pods.go:126] duration metric: took 4.010324ms to wait for k8s-apps to be running ... I0221 09:08:41.451626 450843 system_svc.go:44] waiting for kubelet service to be running .... I0221 09:08:41.451661 450843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:08:41.461622 450843 system_svc.go:56] duration metric: took 9.989373ms WaitForService to wait for kubelet. I0221 09:08:41.461652 450843 kubeadm.go:548] duration metric: took 4m11.240635372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 09:08:41.461680 450843 node_conditions.go:102] verifying NodePressure condition ... I0221 09:08:41.464863 450843 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:08:41.464907 450843 node_conditions.go:123] node cpu capacity is 8 I0221 09:08:41.464917 450843 node_conditions.go:105] duration metric: took 3.227765ms to run NodePressure ... I0221 09:08:41.464926 450843 start.go:213] waiting for startup goroutines ... I0221 09:08:41.499178 450843 start.go:496] kubectl: 1.23.4, cluster: 1.23.4 (minor skew: 0) I0221 09:08:41.501675 450843 out.go:176] * Done! 
kubectl is now configured to use "bridge-20220221084933-6550" cluster and "default" namespace by default I0221 09:08:42.033362 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:44.034484 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:46.532343 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:49.032155 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:51.033187 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:53.532405 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:56.032905 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:08:58.532721 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:00.532794 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:03.032150 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:05.032898 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:07.532448 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:10.032749 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:12.532590 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:15.033767 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:17.532289 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:20.032211 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:22.532201 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:25.032677 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:27.032803 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:29.531735 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:31.531982 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:34.033899 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:36.532071 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" I0221 09:09:39.032274 462115 pod_ready.go:102] pod "coredns-64897985d-cx6k8" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:02:54 UTC, end at Mon 2022-02-21 09:09:44 UTC. 
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[214]: time="2022-02-21T09:02:56.244386588Z" level=info msg="Daemon shutdown complete"
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: docker.service: Succeeded.
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: Stopped Docker Application Container Engine.
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: Starting Docker Application Container Engine...
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.334905197Z" level=info msg="Starting up"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336896710Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336924458Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336951738Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  0 }] }" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.336962413Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338038722Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338061880Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338075622Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  0 }] }" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.338086812Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.342479756Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348140456Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348164146Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348169603Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.348328051Z" level=info msg="Loading containers: start."
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.430255625Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.466301525Z" level=info msg="Loading containers: done."
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.478724309Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.478782822Z" level=info msg="Daemon has completed initialization"
Feb 21 09:02:56 kindnet-20220221084934-6550 systemd[1]: Started Docker Application Container Engine.
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.496951660Z" level=info msg="API listen on [::]:2376"
Feb 21 09:02:56 kindnet-20220221084934-6550 dockerd[460]: time="2022-02-21T09:02:56.500973648Z" level=info msg="API listen on /var/run/docker.sock"
* 
* ==> container status <==
* 
CONTAINER       IMAGE                                                                                                              CREATED         STATE    NAME                      ATTEMPT  POD ID
7a3bfe996b397   k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1         6 minutes ago   Running  dnsutils                  0        c3ef6cecbc94f
f2ab40995bb27   a4ca41631cc7a                                                                                                      6 minutes ago   Running  coredns                   0        4f60f93c15694
4a4b744690f25   6e38f40d628db                                                                                                      6 minutes ago   Running  storage-provisioner       0        2f181e31e7536
2ed4ff0a0f504   kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c                           6 minutes ago   Running  kindnet-cni               0        4f6e7ea40b1b4
d411d70ae4d28   2114245ec4d6b                                                                                                      6 minutes ago   Running  kube-proxy                0        8ff7ab628ae6c
30bfd023cee4b   62930710c9634                                                                                                      6 minutes ago   Running  kube-apiserver            0        419ab81f59e8d
d3125748aff71   aceacb6244f9f                                                                                                      6 minutes ago   Running  kube-scheduler            0        3a2ef27de0509
402525f4b6a6b   25444908517a5                                                                                                      6 minutes ago   Running  kube-controller-manager   0        78fc95e5b159d
026fb6380dcde   25f8c7f3da61c                                                                                                      6 minutes ago   Running  etcd                      0        1bf7e091ed075
* 
* ==> coredns [f2ab40995bb2] <==
* 
.:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
* 
* ==> describe nodes <==
* 
Name:               kindnet-20220221084934-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kindnet-20220221084934-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=kindnet-20220221084934-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T09_03_11_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 09:03:07 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kindnet-20220221084934-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:09:39 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 21 Feb 2022 09:09:18 +0000   Mon, 21 Feb 2022 09:03:31 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    kindnet-20220221084934-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                2787a248-8102-41be-94ef-882a836b4e46
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                                  CPU Requests         CPU Limits          Memory Requests     Memory Limits        Age
  ---------    ----                                                  ------------         ----------          ---------------     -------------        ---
  default      netcat-668db85669-lcmt9                               0 (0%!)(MISSING)     0 (0%!)(MISSING)    0 (0%!)(MISSING)    0 (0%!)(MISSING)     6m4s
  kube-system  coredns-64897985d-svjnh                               100m (1%!)(MISSING)  0 (0%!)(MISSING)    70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 6m21s
  kube-system  etcd-kindnet-20220221084934-6550                      100m (1%!)(MISSING)  0 (0%!)(MISSING)    100Mi (0%!)(MISSING) 0 (0%!)(MISSING)    6m33s
  kube-system  kindnet-b7vpv                                         100m (1%!)(MISSING)  100m (1%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING)  6m21s
  kube-system  kube-apiserver-kindnet-20220221084934-6550            250m (3%!)(MISSING)  0 (0%!)(MISSING)    0 (0%!)(MISSING)    0 (0%!)(MISSING)     6m33s
  kube-system  kube-controller-manager-kindnet-20220221084934-6550   200m (2%!)(MISSING)  0 (0%!)(MISSING)    0 (0%!)(MISSING)    0 (0%!)(MISSING)     6m33s
  kube-system  kube-proxy-hvpn5                                      0 (0%!)(MISSING)     0 (0%!)(MISSING)    0 (0%!)(MISSING)    0 (0%!)(MISSING)     6m21s
  kube-system  kube-scheduler-kindnet-20220221084934-6550            100m (1%!)(MISSING)  0 (0%!)(MISSING)    0 (0%!)(MISSING)    0 (0%!)(MISSING)     6m33s
  kube-system  storage-provisioner                                   0 (0%!)(MISSING)     0 (0%!)(MISSING)    0 (0%!)(MISSING)    0 (0%!)(MISSING)     6m20s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                850m (10%!)(MISSING)  100m (1%!)(MISSING)
  memory             220Mi (0%!)(MISSING)  220Mi (0%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age    From        Message
  ----    ------                   ----   ----        -------
  Normal  Starting                 6m21s  kube-proxy  
  Normal  Starting                 6m34s  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m34s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m34s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m34s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m33s  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                6m13s  kubelet     Node kindnet-20220221084934-6550 status is now: NodeReady
* 
* ==> dmesg <==
* 
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[  +0.807956] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a
[  +0.000006] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00
[  +0.215904] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[  +1.019944] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[  +0.500012] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[  +1.003841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[  +1.023942] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[  +0.427998] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[  +0.807964] IPv4: martian source 10.244.0.223 from 10.244.0.3, on dev br-5d96ab4d6b1a
[  +0.000009] ll header: 00000000: 02 42 58 0b cb 43 02 42 c0 a8 31 02 08 00
[  +0.203925] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[  +1.027893] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[  +3.491828] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[  +1.015843] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
* 
* ==> etcd [026fb6380dcd] <==
* 
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:kindnet-20220221084934-6550 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-02-21T09:03:04.712Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-02-21T09:03:04.713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:03:04.713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:03:04.714Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"} {"level":"info","ts":"2022-02-21T09:03:04.714Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:03:04.716Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:03:04.716Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"warn","ts":"2022-02-21T09:03:34.975Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"225.435233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:03:34.975Z","caller":"traceutil/trace.go:171","msg":"trace[1189977891] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:497; }","duration":"225.575766ms","start":"2022-02-21T09:03:34.750Z","end":"2022-02-21T09:03:34.975Z","steps":["trace[1189977891] 'range keys from in-memory index tree' (duration: 225.337024ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:03:34.975Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"319.726472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4667"} {"level":"info","ts":"2022-02-21T09:03:34.975Z","caller":"traceutil/trace.go:171","msg":"trace[1649275344] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:497; }","duration":"319.918366ms","start":"2022-02-21T09:03:34.655Z","end":"2022-02-21T09:03:34.975Z","steps":["trace[1649275344] 'range keys from in-memory index tree' (duration: 319.584017ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:03:34.975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:03:34.655Z","time spent":"319.994394ms","remote":"127.0.0.1:33264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":4691,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "} {"level":"warn","ts":"2022-02-21T09:03:56.870Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.450404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:03:56.870Z","caller":"traceutil/trace.go:171","msg":"trace[1212159472] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:540; }","duration":"120.560675ms","start":"2022-02-21T09:03:56.750Z","end":"2022-02-21T09:03:56.870Z","steps":["trace[1212159472] 'agreement among raft nodes before linearized reading' (duration: 28.876562ms)","trace[1212159472] 'range keys from in-memory index tree' (duration: 91.568692ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:03:58.855Z","caller":"traceutil/trace.go:171","msg":"trace[883194937] 
linearizableReadLoop","detail":"{readStateIndex:562; appliedIndex:562; }","duration":"105.270775ms","start":"2022-02-21T09:03:58.750Z","end":"2022-02-21T09:03:58.855Z","steps":["trace[883194937] 'read index received' (duration: 105.259046ms)","trace[883194937] 'applied index is now lower than readState.Index' (duration: 10.307µs)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:03:58.957Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"206.591649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:03:58.957Z","caller":"traceutil/trace.go:171","msg":"trace[1031669119] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:541; }","duration":"206.688911ms","start":"2022-02-21T09:03:58.750Z","end":"2022-02-21T09:03:58.957Z","steps":["trace[1031669119] 'agreement among raft nodes before linearized reading' (duration: 105.388231ms)","trace[1031669119] 'range keys from in-memory index tree' (duration: 101.17504ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:03:59.382Z","caller":"traceutil/trace.go:171","msg":"trace[1952457421] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"160.781985ms","start":"2022-02-21T09:03:59.221Z","end":"2022-02-21T09:03:59.382Z","steps":["trace[1952457421] 'process raft request' (duration: 63.877871ms)","trace[1952457421] 'compare' (duration: 96.783784ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:04:00.989Z","caller":"traceutil/trace.go:171","msg":"trace[426299990] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"239.672823ms","start":"2022-02-21T09:04:00.749Z","end":"2022-02-21T09:04:00.989Z","steps":["trace[426299990] 'read index received' (duration: 239.664183ms)","trace[426299990] 'applied index is now lower than readState.Index' (duration: 7.391µs)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:04:00.991Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"242.264699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:04:00.991Z","caller":"traceutil/trace.go:171","msg":"trace[1618152808] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"242.338947ms","start":"2022-02-21T09:04:00.749Z","end":"2022-02-21T09:04:00.991Z","steps":["trace[1618152808] 'agreement among raft nodes before linearized reading' (duration: 239.821572ms)"],"step_count":1} * * ==> kernel <== * 09:09:45 up 52 min, 0 users, load average: 1.17, 2.17, 2.85 Linux kindnet-20220221084934-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [30bfd023cee4] <== * I0221 09:03:07.726604 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 09:03:07.726640 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 09:03:07.733603 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 09:03:07.749557 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 09:03:08.625861 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). 
I0221 09:03:08.632794  1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0221 09:03:08.634094  1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0221 09:03:08.637287  1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0221 09:03:08.637308  1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0221 09:03:09.089112  1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0221 09:03:09.137979  1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0221 09:03:09.225516  1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0221 09:03:09.231067  1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0221 09:03:09.232625  1 controller.go:611] quota admission added evaluator for: endpoints
I0221 09:03:09.237210  1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0221 09:03:09.767749  1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0221 09:03:10.576047  1 controller.go:611] quota admission added evaluator for: deployments.apps
I0221 09:03:10.584875  1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0221 09:03:10.595885  1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0221 09:03:10.815301  1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0221 09:03:23.072677  1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0221 09:03:23.523367  1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0221 09:03:23.974139  1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0221 09:03:40.627749  1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.108.109.54]
E0221 09:07:44.715564  1 upgradeaware.go:409] Error proxying data from client to backend: write tcp 192.168.49.2:46294->192.168.49.2:10250: write: broken pipe
* 
* ==> kube-controller-manager [402525f4b6a6] <==
* 
I0221 09:03:22.619439  1 shared_informer.go:247] Caches are synced for attach detach
I0221 09:03:22.619472  1 shared_informer.go:247] Caches are synced for TTL
I0221 09:03:22.620573  1 shared_informer.go:247] Caches are synced for persistent volume
I0221 09:03:22.621752  1 shared_informer.go:247] Caches are synced for TTL after finished
I0221 09:03:22.774867  1 shared_informer.go:247] Caches are synced for deployment
I0221 09:03:22.796439  1 shared_informer.go:247] Caches are synced for resource quota
I0221 09:03:22.807569  1 shared_informer.go:247] Caches are synced for disruption
I0221 09:03:22.807598  1 disruption.go:371] Sending events to api server.
I0221 09:03:22.819774  1 shared_informer.go:247] Caches are synced for ReplicaSet
I0221 09:03:22.822180  1 shared_informer.go:247] Caches are synced for resource quota
I0221 09:03:23.078397  1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hvpn5"
I0221 09:03:23.080718  1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b7vpv"
I0221 09:03:23.241044  1 shared_informer.go:247] Caches are synced for garbage collector
I0221 09:03:23.268409  1 shared_informer.go:247] Caches are synced for garbage collector
I0221 09:03:23.268431  1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0221 09:03:23.525737  1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0221 09:03:23.625809  1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-t6244"
I0221 09:03:23.631868  1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-svjnh"
I0221 09:03:23.808344  1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0221 09:03:23.819880  1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-t6244"
I0221 09:03:32.547490  1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0221 09:03:40.621039  1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1"
I0221 09:03:40.634471  1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-lcmt9"
W0221 09:03:40.638491  1 endpointslice_controller.go:306] Error syncing endpoint slices for service "default/netcat", retrying. Error: EndpointSlice informer cache is out of date
I0221 09:03:40.642458  1 event.go:294] "Event occurred" object="netcat" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service default/netcat: endpoints \"netcat\" already exists"
* 
* ==> kube-proxy [d411d70ae4d2] <==
* 
I0221 09:03:23.945814  1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0221 09:03:23.945883  1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0221 09:03:23.945928  1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0221 09:03:23.970776  1 server_others.go:206] "Using iptables Proxier"
I0221 09:03:23.970823  1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0221 09:03:23.970834  1 server_others.go:214] "Creating dualStackProxier for iptables"
I0221 09:03:23.970852  1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0221 09:03:23.971373  1 server.go:656] "Version info" version="v1.23.4"
I0221 09:03:23.972242  1 config.go:317] "Starting service config controller"
I0221 09:03:23.972263  1 shared_informer.go:240] Waiting for caches to sync for service config
I0221 09:03:23.972287  1 config.go:226] "Starting endpoint slice config controller"
I0221 09:03:23.972291  1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0221 09:03:24.072973  1 shared_informer.go:247] Caches are synced for endpoint slice config
I0221 09:03:24.072984  1 shared_informer.go:247] Caches are synced for service config
* 
* ==> kube-scheduler [d3125748aff7] <==
* 
W0221 09:03:07.723837  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0221 09:03:07.723865  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0221 09:03:07.723891  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0221 09:03:07.723910  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0221 09:03:08.546968  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0221 09:03:08.547050  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0221 09:03:08.586371  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0221 09:03:08.586462  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0221 09:03:08.673241  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:08.673286  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:08.676239  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:08.676274  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:08.704454  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0221 09:03:08.704501  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0221 09:03:08.704604  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0221 09:03:08.704639  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0221 09:03:08.750624  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0221 09:03:08.750661  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0221 09:03:08.807066  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:08.807104  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:08.903652  1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0221 09:03:08.903685  1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0221 09:03:08.958559  1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0221 09:03:08.958591  1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0221 09:03:11.217744  1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* 
* ==> kubelet <==
* 
-- Logs begin at Mon 2022-02-21 09:02:54 UTC, end at Mon 2022-02-21 09:09:45 UTC. --
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:22.571383    1938 kuberuntime_manager.go:1098] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:22.571833    1938 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:22.571998    1938 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Feb 21 09:03:22 kindnet-20220221084934-6550 kubelet[1938]: E0221 09:03:22.580345    1938 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.084009    1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.086355    1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202185    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/70703c09-41bc-4c02-9ccf-df45333fbc70-cni-cfg\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202259    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eac36e6a-fd59-49e4-a536-c2aa610984ef-lib-modules\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202293    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70703c09-41bc-4c02-9ccf-df45333fbc70-xtables-lock\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202387    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmwp\" (UniqueName: \"kubernetes.io/projected/70703c09-41bc-4c02-9ccf-df45333fbc70-kube-api-access-nlmwp\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202470    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70703c09-41bc-4c02-9ccf-df45333fbc70-lib-modules\") pod \"kindnet-b7vpv\" (UID: \"70703c09-41bc-4c02-9ccf-df45333fbc70\") " pod="kube-system/kindnet-b7vpv"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202595    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eac36e6a-fd59-49e4-a536-c2aa610984ef-kube-proxy\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202647    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eac36e6a-fd59-49e4-a536-c2aa610984ef-xtables-lock\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:23 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:23.202684    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncqr\" (UniqueName: \"kubernetes.io/projected/eac36e6a-fd59-49e4-a536-c2aa610984ef-kube-api-access-8ncqr\") pod \"kube-proxy-hvpn5\" (UID: \"eac36e6a-fd59-49e4-a536-c2aa610984ef\") " pod="kube-system/kube-proxy-hvpn5"
Feb 21 09:03:25 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:25.721597    1938 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
Feb 21 09:03:26 kindnet-20220221084934-6550 kubelet[1938]: E0221 09:03:26.242713    1938 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.791100    1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.791382    1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959217    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4kv\" (UniqueName: \"kubernetes.io/projected/84ae4f8f-baa9-4b02-a1f6-5d9026e71769-kube-api-access-nf4kv\") pod \"storage-provisioner\" (UID: \"84ae4f8f-baa9-4b02-a1f6-5d9026e71769\") " pod="kube-system/storage-provisioner"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959297    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd666a7b-1888-4f96-8615-0a625ca7c35a-config-volume\") pod \"coredns-64897985d-svjnh\" (UID: \"cd666a7b-1888-4f96-8615-0a625ca7c35a\") " pod="kube-system/coredns-64897985d-svjnh"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959339    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/84ae4f8f-baa9-4b02-a1f6-5d9026e71769-tmp\") pod \"storage-provisioner\" (UID: \"84ae4f8f-baa9-4b02-a1f6-5d9026e71769\") " pod="kube-system/storage-provisioner"
Feb 21 09:03:31 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:31.959375    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnssc\" (UniqueName: \"kubernetes.io/projected/cd666a7b-1888-4f96-8615-0a625ca7c35a-kube-api-access-wnssc\") pod \"coredns-64897985d-svjnh\" (UID: \"cd666a7b-1888-4f96-8615-0a625ca7c35a\") " pod="kube-system/coredns-64897985d-svjnh"
Feb 21 09:03:40 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:40.639552    1938 topology_manager.go:200] "Topology Admit Handler"
Feb 21 09:03:40 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:40.807948    1938 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtsc\" (UniqueName: \"kubernetes.io/projected/0fd0efca-25d3-42b8-b210-f9f1dd5821bd-kube-api-access-dxtsc\") pod \"netcat-668db85669-lcmt9\" (UID: \"0fd0efca-25d3-42b8-b210-f9f1dd5821bd\") " pod="default/netcat-668db85669-lcmt9"
Feb 21 09:03:41 kindnet-20220221084934-6550 kubelet[1938]: I0221 09:03:41.264386    1938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c3ef6cecbc94f88f4f4ba2852ddd55bb38a48d6eba24c50cc663a7059acb1abb"
* 
* ==> storage-provisioner [4a4b744690f2] <==
* 
I0221 09:03:32.483323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0221 09:03:32.511147       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0221 09:03:32.511218       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0221 09:03:32.532448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0221 09:03:32.532609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1caf4c3-f6ca-4315-b5f2-ad23ee3af26a", APIVersion:"v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kindnet-20220221084934-6550_b8beaf6e-41e9-47e8-8fb7-ee09cb02d620 became leader
I0221 09:03:32.532621       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kindnet-20220221084934-6550_b8beaf6e-41e9-47e8-8fb7-ee09cb02d620!
I0221 09:03:32.632930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kindnet-20220221084934-6550_b8beaf6e-41e9-47e8-8fb7-ee09cb02d620!
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kindnet-20220221084934-6550 -n kindnet-20220221084934-6550
helpers_test.go:262: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/kindnet]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 describe pod : exit status 1 (38.839765ms)

** stderr ** 
error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context kindnet-20220221084934-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "kindnet-20220221084934-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kindnet-20220221084934-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kindnet-20220221084934-6550: (2.714574033s)
--- FAIL: TestNetworkPlugins/group/kindnet (422.20s)

=== FAIL: . TestNetworkPlugins/group/bridge/DNS (281.38s)
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160193644s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127958606s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146887863s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130719111s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15894252s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127007778s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13905095s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:11:16.369953    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132482174s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:11:46.065538    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.071578    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.082474    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.103250    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.144057    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.225233    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.386034    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.706601    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:47.347094    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:48.628104    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:51.188585    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:56.308747    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138079671s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125001889s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
E0221 09:12:30.568932    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:12:36.193616    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:13:29.174402    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143988318s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (281.38s)

=== FAIL: . TestNetworkPlugins/group/bridge (588.30s)
net_test.go:198: "bridge" test finished in 24m0.75141613s, failed=true
net_test.go:199: *** TestNetworkPlugins/group/bridge FAILED at 2022-02-21 09:13:34.512960034 +0000 UTC m=+2907.275279626
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/bridge]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect bridge-20220221084933-6550
helpers_test.go:236: (dbg) docker inspect bridge-20220221084933-6550:

-- stdout --
[
    {
        "Id": "92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79",
        "Created": "2022-02-21T09:04:01.183512299Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 452177,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-02-21T09:04:01.608435405Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43",
        "ResolvConfPath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/hostname",
        "HostsPath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/hosts",
        "LogPath": "/var/lib/docker/containers/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79/92f2512247c48e2a4f4c4fa198db26fe6a0ededdc8b3b0cdcddc74eb7d584f79-json.log",
        "Name": "/bridge-20220221084933-6550",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "unconfined",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro",
                "bridge-20220221084933-6550:/var"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "bridge-20220221084933-6550",
            "PortBindings": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "32443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "apparmor=unconfined",
                "label=disable"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 2000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [
                {
                    "PathOnHost": "/dev/fuse",
                    "PathInContainer": "/dev/fuse",
                    "CgroupPermissions": "rwm"
                }
            ],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
"MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bf
e328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a/merged", "UpperDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a/diff", "WorkDir": "/var/lib/docker/overlay2/ed5a45fcc74e2dd89241db3e86709ba8d8411989364257cc31812097e249070a/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "bridge-20220221084933-6550", "Source": "/var/lib/docker/volumes/bridge-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "bridge-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "bridge-20220221084933-6550", "name.minikube.sigs.k8s.io": "bridge-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "0f4bfabff1b3d095a573c55ed3b3202d1cf91495e39f99183c5a4ec4ee6861c4", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49394" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49393" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49390" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49392" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49391" } ] }, "SandboxKey": "/var/run/docker/netns/0f4bfabff1b3", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": 
{ "bridge-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.67.2" }, "Links": null, "Aliases": [ "92f2512247c4", "bridge-20220221084933-6550" ], "NetworkID": "0c80bded97cfa73ce5c331c3eb3fb63b7ea93362767e43bd30c1be5861caa896", "EndpointID": "44fb9677e35dc60cc44ff4015b2a55e27655ad27e76bab29c355b60245b43a65", "Gateway": "192.168.67.1", "IPAddress": "192.168.67.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:43:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p bridge-20220221084933-6550 -n bridge-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/bridge FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/bridge]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p bridge-20220221084933-6550 logs -n 25 E0221 09:13:35.029707 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.035270 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.045514 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.066322 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.106616 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.186973 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.347422 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory E0221 09:13:35.668057 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p bridge-20220221084933-6550 logs -n 25: (1.118026474s) helpers_test.go:253: TestNetworkPlugins/group/bridge logs: -- stdout -- * * ==> Audit <== * |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | 
Profile | User | Version | Start Time | End Time | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | ssh | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:56:15 UTC | Mon, 21 Feb 2022 08:56:16 UTC | | | pgrep -a kubelet | | | | | | | start | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 08:53:29 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:01:45 UTC | Mon, 21 Feb 2022 09:01:45 UTC | | | pgrep -a kubelet | | | | | | | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | 
v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | | -p | kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:44 UTC | Mon, 21 Feb 2022 09:09:45 UTC | | | logs -n 25 | | | | | | | delete | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:45 UTC | Mon, 21 Feb 2022 09:09:48 UTC | | start | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:48 UTC | Mon, 21 Feb 2022 09:11:57 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --kvm-network=default | | | | | | | | --kvm-qemu-uri=qemu:///system | | | | | | | | --disable-driver-mounts | | | | | | | | --keep-context=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.16.0 | | | | | | | addons | enable metrics-server -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:06 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | | | | --registries=MetricsServer=fake.domain | | | | | | | start | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:25 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --network-plugin=kubenet | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:15 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | pgrep -a kubelet | | | | | | | stop | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:17 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:12:18 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:12:18.063563 481686 out.go:297] Setting OutFile to fd 1 ... I0221 09:12:18.063667 481686 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:12:18.063680 481686 out.go:310] Setting ErrFile to fd 2... 
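The "Last Start" header above spells out the log line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, i.e. klog-style). If you need to slice these entries up, for example to diff two runs, a small parser is enough; the regexp below is my own approximation of that documented format, not code from klog or minikube:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
// header documented at the top of the log. Pattern written for this sketch.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	sample := "I0221 09:12:18.063563 481686 out.go:297] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```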
I0221 09:12:18.063686 481686 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:12:18.063879 481686 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:12:18.064401 481686 out.go:304] Setting JSON to false I0221 09:12:18.066180 481686 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3292,"bootTime":1645431446,"procs":471,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:12:18.066283 481686 start.go:122] virtualization: kvm guest I0221 09:12:18.069062 481686 out.go:176] * [old-k8s-version-20220221090948-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:12:18.070941 481686 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:12:18.069225 481686 notify.go:193] Checking for updates... I0221 09:12:18.072550 481686 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:12:18.074232 481686 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:12:18.075722 481686 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:12:18.077236 481686 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:12:18.077830 481686 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0 I0221 09:12:18.079929 481686 out.go:176] * Kubernetes 1.23.4 is now available. 
If you would like to upgrade, specify: --kubernetes-version=v1.23.4 I0221 09:12:18.079966 481686 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:12:18.132076 481686 docker.go:132] docker version: linux-20.10.12 I0221 09:12:18.132199 481686 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:12:18.243268 481686 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:12:18.170280502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 09:12:18.243371 481686 docker.go:237] overlay module found I0221 09:12:18.245988 481686 out.go:176] * Using the docker driver based on existing profile I0221 09:12:18.246020 481686 start.go:281] selected driver: docker I0221 09:12:18.246026 481686 start.go:798] validating driver "docker" against &{Name:old-k8s-version-20220221090948-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:12:18.246140 481686 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:12:18.246188 481686 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:12:18.246211 481686 out.go:241] ! Your cgroup does not allow setting memory. 
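The "! Your cgroup does not allow setting memory." warning above (oci.go:119) fires when the kernel's memory cgroup controller is missing or unmounted, so the requested --memory=2200 limit cannot be enforced on the container. A rough reproduction of such a detection on a cgroup v1 host, reading /proc/cgroups; this is an illustrative check under that assumption, not minikube's implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// memoryCgroupEnabled reports whether the "memory" controller is listed as
// enabled in /proc/cgroups (cgroup v1). On cgroup v2 hosts one would consult
// /sys/fs/cgroup/cgroup.controllers instead.
func memoryCgroupEnabled() (bool, error) {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Format: subsys_name  hierarchy  num_cgroups  enabled
		fields := strings.Fields(sc.Text())
		if len(fields) == 4 && fields[0] == "memory" {
			return fields[3] == "1", nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := memoryCgroupEnabled()
	fmt.Println("memory cgroup enabled:", ok, "err:", err)
}
```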
I0221 09:12:18.247617 481686 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:12:18.248269 481686 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:12:18.365356 481686 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:12:18.283585104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} W0221 09:12:18.365474 481686 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:12:18.365497 481686 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 09:12:18.369348 481686 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:12:18.369446 481686 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:12:18.369488 481686 cni.go:93] Creating CNI manager for "" I0221 09:12:18.369505 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:12:18.369519 481686 start_flags.go:302] config: {Name:old-k8s-version-20220221090948-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:12:18.371447 481686 out.go:176] * Starting control plane node old-k8s-version-20220221090948-6550 in cluster old-k8s-version-20220221090948-6550 I0221 09:12:18.371481 481686 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:12:18.372838 481686 out.go:176] * Pulling base image ... 
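"Pulling base image ..." is mostly a cache probe here: the entries that follow check whether the pinned kicbase image already exists in the local docker daemon and skip the pull when it does. The equivalent existence test via the docker CLI looks roughly like this sketch (docker image inspect exits non-zero for a missing image; minikube itself goes through its own image package):

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has ref.
// `docker image inspect` exits non-zero when the image is absent, which is
// all this sketch relies on.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not cached, would pull", ref)
	}
}
```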
I0221 09:12:18.372866 481686 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker I0221 09:12:18.372899 481686 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 I0221 09:12:18.372907 481686 cache.go:57] Caching tarball of preloaded images I0221 09:12:18.372961 481686 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:12:18.373221 481686 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0221 09:12:18.373240 481686 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker I0221 09:12:18.373359 481686 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/config.json ... I0221 09:12:18.437016 481686 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:12:18.437050 481686 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:12:18.437063 481686 cache.go:208] Successfully downloaded all kic artifacts I0221 09:12:18.437096 481686 start.go:313] acquiring machines lock for old-k8s-version-20220221090948-6550: {Name:mkc2c1cda1482e6b6fedc7dd454394ebc20d0304 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:12:18.437205 481686 start.go:317] acquired machines lock for "old-k8s-version-20220221090948-6550" in 82.821µs I0221 09:12:18.437229 481686 start.go:93] Skipping create...Using existing machine configuration I0221 09:12:18.437236 481686 fix.go:55] fixHost starting: I0221 09:12:18.437532 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:12:18.474067 481686 fix.go:108] recreateIfNeeded on old-k8s-version-20220221090948-6550: state=Stopped err= W0221 09:12:18.474097 481686 fix.go:134] unexpected machine state, will restart: I0221 09:12:18.477131 481686 out.go:176] * Restarting existing docker container for "old-k8s-version-20220221090948-6550" ... I0221 09:12:18.477189 481686 cli_runner.go:133] Run: docker start old-k8s-version-20220221090948-6550 I0221 09:12:18.916066 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:12:18.958039 481686 kic.go:420] container "old-k8s-version-20220221090948-6550" state is running. 
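The restart above is confirmed by querying docker container inspect --format={{.State.Status}} until the container reports "running". The same status poll from Go, shelling out to the CLI the way the cli_runner entries do; the loop count and sleep interval are arbitrary choices for this sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus returns the docker container state string ("running",
// "exited", ...) for name, mirroring the log's cli_runner invocation.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "old-k8s-version-20220221090948-6550"
	for i := 0; i < 10; i++ {
		if st, err := containerStatus(name); err == nil && st == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("container did not reach running state")
}
```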
I0221 09:12:18.958636 481686 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220221090948-6550 I0221 09:12:18.997177 481686 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/config.json ... I0221 09:12:18.997400 481686 machine.go:88] provisioning docker machine ... I0221 09:12:18.997429 481686 ubuntu.go:169] provisioning hostname "old-k8s-version-20220221090948-6550" I0221 09:12:18.997463 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:19.046081 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:19.046324 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:19.046353 481686 main.go:130] libmachine: About to run SSH command: sudo hostname old-k8s-version-20220221090948-6550 && echo "old-k8s-version-20220221090948-6550" | sudo tee /etc/hostname I0221 09:12:19.047041 481686 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45212->127.0.0.1:49409: read: connection reset by peer I0221 09:12:22.179930 481686 main.go:130] libmachine: SSH cmd err, output: : old-k8s-version-20220221090948-6550 I0221 09:12:22.180015 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:22.214794 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:22.214943 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:22.214963 481686 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sold-k8s-version-20220221090948-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220221090948-6550/g' /etc/hosts; else echo '127.0.1.1 old-k8s-version-20220221090948-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:12:22.338910 481686 main.go:130] libmachine: SSH cmd err, output: : I0221 09:12:22.338956 481686 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:12:22.339028 481686 ubuntu.go:177] setting up certificates I0221 09:12:22.339043 481686 provision.go:83] configureAuth start I0221 09:12:22.339106 481686 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220221090948-6550 I0221 09:12:22.372440 481686 provision.go:138] copyHostCerts I0221 09:12:22.372507 481686 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:12:22.372520 481686 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:12:22.372590 481686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:12:22.372706 481686 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... 
I0221 09:12:22.372722 481686 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:12:22.372750 481686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:12:22.372831 481686 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:12:22.372844 481686 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:12:22.372873 481686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:12:22.372945 481686 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220221090948-6550 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220221090948-6550] I0221 09:12:22.657456 481686 provision.go:172] copyRemoteCerts I0221 09:12:22.657524 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:12:22.657556 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:22.691986 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:22.778536 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes) I0221 09:12:22.796303 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:12:22.813782 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:12:22.831458 481686 provision.go:86] duration metric: configureAuth took 492.398552ms I0221 09:12:22.831489 481686 ubuntu.go:193] setting minikube options for container-runtime I0221 09:12:22.831672 481686 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, 
ContainerRuntime=docker, KubernetesVersion=v1.16.0 I0221 09:12:22.831714 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:22.865147 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:22.865310 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:22.865323 481686 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:12:22.987160 481686 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:12:22.987181 481686 ubuntu.go:71] root file system type: overlay I0221 09:12:22.987381 481686 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:12:22.987440 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.022112 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:23.022272 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:23.022370 481686 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:12:23.151950 481686 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:12:23.152020 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.186215 481686 main.go:130] libmachine: Using SSH client type: native I0221 09:12:23.186462 481686 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49409 } I0221 09:12:23.186483 481686 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:12:23.311126 481686 main.go:130] libmachine: SSH cmd err, output: : I0221 09:12:23.311159 481686 machine.go:91] provisioned docker machine in 4.313744156s I0221 09:12:23.311168 481686 start.go:267] post-start starting for "old-k8s-version-20220221090948-6550" (driver="docker") I0221 09:12:23.311173 481686 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:12:23.311226 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:12:23.311263 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.345260 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.435066 481686 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:12:23.438889 481686 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:12:23.438921 481686 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:12:23.438935 481686 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:12:23.438941 481686 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:12:23.438952 481686 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... I0221 09:12:23.439058 481686 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... 
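The asset scan announced above walks .minikube/files and mirrors each file to the same relative path inside the machine, which is how files/etc/ssl/certs/65502.pem lands in /etc/ssl/certs in the next entry. A sketch of that walk; the real filesync.go does more than enumerate paths:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listLocalAssets mirrors the filesync scan in the log: every file found
// under root maps to the same relative path on the target machine.
func listLocalAssets(root string) ([]string, error) {
	var assets []string
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(root, p)
		if relErr != nil {
			return relErr
		}
		// e.g. etc/ssl/certs/65502.pem -> /etc/ssl/certs/65502.pem
		assets = append(assets, "/"+filepath.ToSlash(rel))
		return nil
	})
	return assets, err
}

func main() {
	assets, err := listLocalAssets(".minikube/files")
	fmt.Println(assets, err)
}
```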
I0221 09:12:23.439165 481686 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:12:23.439290 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:12:23.446645 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:12:23.464506 481686 start.go:270] post-start completed in 153.326516ms I0221 09:12:23.464566 481686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:12:23.464602 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.498324 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.587416 481686 fix.go:57] fixHost completed within 5.15017508s I0221 09:12:23.587444 481686 start.go:80] releasing machines lock for "old-k8s-version-20220221090948-6550", held for 5.15022486s I0221 09:12:23.587526 481686 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220221090948-6550 I0221 09:12:23.621246 481686 ssh_runner.go:195] Run: systemctl --version I0221 09:12:23.621295 481686 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:12:23.621306 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.621335 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:12:23.658227 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.660144 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:12:23.890444 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:12:23.902665 481686 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:12:23.912077 481686 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:12:23.912502 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:12:23.922862 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:12:23.935994 481686 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:12:24.015940 481686 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 
09:12:24.095712 481686 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:12:24.106277 481686 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:12:24.184533 481686 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:12:24.194410 481686 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:12:24.235854 481686 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:12:24.278883 481686 out.go:203] * Preparing Kubernetes v1.16.0 on Docker 20.10.12 ... I0221 09:12:24.278954 481686 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220221090948-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:12:24.313540 481686 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0221 09:12:24.317078 481686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:12:24.328491 481686 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:12:24.328555 481686 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker I0221 09:12:24.328601 481686 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:12:24.364128 481686 docker.go:606] Got preloaded images: -- stdout -- kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-proxy:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 gcr.io/k8s-minikube/busybox:1.28.4-glibc k8s.gcr.io/pause:3.1 -- /stdout -- I0221 09:12:24.364151 481686 docker.go:537] Images already preloaded, skipping extraction I0221 09:12:24.364203 481686 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:12:24.399461 481686 docker.go:606] Got preloaded images: -- stdout -- kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/kube-proxy:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 gcr.io/k8s-minikube/busybox:1.28.4-glibc k8s.gcr.io/pause:3.1 -- /stdout -- I0221 09:12:24.399490 481686 cache_images.go:84] Images are preloaded, skipping loading I0221 09:12:24.399541 481686 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:12:24.486001 481686 cni.go:93] Creating CNI manager for "" I0221 09:12:24.486035 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:12:24.486052 481686 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:12:24.486070 481686 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220221090948-6550 NodeName:old-k8s-version-20220221090948-6550 DNSDomain:cluster.local 
CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:12:24.486248 481686 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta1 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "old-k8s-version-20220221090948-6550" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta1 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: old-k8s-version-20220221090948-6550 controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381 kubernetesVersion: v1.16.0 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:12:24.486349 481686 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220221090948-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:12:24.486406 481686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0 I0221 09:12:24.493584 481686 binaries.go:44] Found k8s binaries, skipping transfer I0221 09:12:24.493638 481686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:12:24.500464 481686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes) I0221 09:12:24.514267 481686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0221 09:12:24.527574 481686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes) I0221 09:12:24.540712 481686 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0221 09:12:24.543820 481686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:12:24.553284 481686 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550 for IP: 192.168.49.2 I0221 09:12:24.553402 481686 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:12:24.553455 481686 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:12:24.553547 481686 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.key I0221 09:12:24.553629 481686 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/apiserver.key.dd3b5fb2 I0221 09:12:24.553681 481686 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/proxy-client.key I0221 09:12:24.553795 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:12:24.553832 481686 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:12:24.553848 481686 certs.go:388] found cert: 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:12:24.553887 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:12:24.553918 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:12:24.553962 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:12:24.554056 481686 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:12:24.555294 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:12:24.573861 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:12:24.591640 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:12:24.609765 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:12:24.628088 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:12:24.645704 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:12:24.663625 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:12:24.681704 
481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:12:24.699295 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:12:24.716958 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:12:24.735157 481686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:12:24.753362 481686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:12:24.766918 481686 ssh_runner.go:195] Run: openssl version I0221 09:12:24.772057 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:12:24.780093 481686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:12:24.783295 481686 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:12:24.783344 481686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:12:24.788424 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:12:24.795451 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:12:24.803050 481686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:12:24.806096 481686 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:12:24.806134 481686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:12:24.810891 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:12:24.817716 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:12:24.825395 481686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:12:24.828413 481686 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:12:24.828454 481686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:12:24.833530 481686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:12:24.840555 481686 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220221090948-6550 KeepContext:false EmbedCerts:false MinikubeISO: 
KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220221090948-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:12:24.840674 481686 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:12:24.873394 481686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:12:24.881295 481686 kubeadm.go:402] found existing configuration files, will attempt cluster restart I0221 09:12:24.881322 481686 kubeadm.go:601] restartCluster start I0221 09:12:24.881365 481686 ssh_runner.go:195] Run: sudo test -d /data/minikube I0221 09:12:24.888139 481686 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0221 09:12:24.889073 481686 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220221090948-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:12:24.889498 481686 kubeconfig.go:127] "old-k8s-version-20220221090948-6550" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - will repair! 
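A note on the certificate steps just above: OpenSSL resolves CAs by subject-hash symlinks, which is why the log computes `openssl x509 -hash -noout` for each cert and then links /etc/ssl/certs/<hash>.0 at it via `test -L ... || ln -fs ...`. A minimal Go sketch of the same wiring, illustrative only and not minikube's certs.go; it assumes an openssl binary on PATH and write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCert wires one CA into OpenSSL's hash-named trust store.
func linkCert(pemPath string) error {
	// openssl prints the subject hash the trust store uses as a filename.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent to the log's `test -L ... || ln -fs ...`: link only once.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}

The `.0` suffix disambiguates hash collisions; a second CA with the same subject hash would get `.1`.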
I0221 09:12:24.890203 481686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:12:24.892424 481686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0221 09:12:24.899475 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:24.899523 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:24.913455 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.113895 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.113968 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.128519 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.313555 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.313624 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.328255 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.514557 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.514685 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.529308 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.714548 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.714633 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.729538 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:25.913691 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:25.913755 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:25.928505 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.113748 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.113812 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.128291 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.314571 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.314664 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.328927 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.514257 481686 api_server.go:165] Checking apiserver status ... 
I0221 09:12:26.514328 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.529228 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.714529 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.714601 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.729366 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:26.913603 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:26.913680 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:26.928318 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.114563 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.114634 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.129974 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.314287 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.314379 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.328939 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.514132 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.514234 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.528968 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.714191 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.714255 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.728825 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.914122 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.914198 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.928679 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.928702 481686 api_server.go:165] Checking apiserver status ... I0221 09:12:27.928734 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:12:27.942333 481686 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:12:27.942363 481686 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition I0221 09:12:27.942370 481686 kubeadm.go:1067] stopping kube-system containers ... 
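The run of `pgrep -xnf kube-apiserver.*minikube.*` probes above is a plain poll-until-deadline loop: try, sleep roughly 200ms, try again, and give up when the budget is spent (which is what triggers the "needs reconfigure" decision). A minimal local sketch of the pattern, not minikube's api_server.go; the 3s budget here is arbitrary:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls for the apiserver pid until the deadline.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe the log runs over SSH; here it is executed locally.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(200 * time.Millisecond) // roughly the cadence in the log
	}
	return errors.New("timed out waiting for the kube-apiserver process")
}

func main() {
	if err := waitForAPIServerProcess(3 * time.Second); err != nil {
		fmt.Println(err) // mirrors the log's fall-through to reconfiguration
	}
}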
I0221 09:12:27.942413 481686 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:12:27.977250 481686 docker.go:438] Stopping containers: [ad0e124a6147 6ac3974c8e9e 19594b2a5b28 fe46eea790da 494b0840ef1b 294e5c15540f 67385820bcc2 f10e557d91a5 d0ae540750ea 93c6a46109d3 5d114ac431ec 00310aa9fd81 d7e39eddf339 6f822b6e43e7] I0221 09:12:27.977321 481686 ssh_runner.go:195] Run: docker stop ad0e124a6147 6ac3974c8e9e 19594b2a5b28 fe46eea790da 494b0840ef1b 294e5c15540f 67385820bcc2 f10e557d91a5 d0ae540750ea 93c6a46109d3 5d114ac431ec 00310aa9fd81 d7e39eddf339 6f822b6e43e7 I0221 09:12:28.014554 481686 ssh_runner.go:195] Run: sudo systemctl stop kubelet I0221 09:12:28.024919 481686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:12:28.032039 481686 kubeadm.go:155] found existing configuration files: -rw------- 1 root root 5747 Feb 21 09:10 /etc/kubernetes/admin.conf -rw------- 1 root root 5783 Feb 21 09:10 /etc/kubernetes/controller-manager.conf -rw------- 1 root root 5919 Feb 21 09:10 /etc/kubernetes/kubelet.conf -rw------- 1 root root 5731 Feb 21 09:10 /etc/kubernetes/scheduler.conf I0221 09:12:28.032102 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0221 09:12:28.038923 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0221 09:12:28.045850 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0221 09:12:28.052684 481686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0221 09:12:28.059412 481686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:12:28.066289 481686 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml I0221 09:12:28.066315 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:28.118534 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:28.863240 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:29.097233 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:29.156501 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:29.244710 481686 api_server.go:51] waiting for apiserver process to appear ... 
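Note that the restart path above reruns individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than doing a full `kubeadm init`. A rough local sketch of that sequence; it assumes kubeadm on PATH and root privileges, and omits the SSH transport and PATH override visible in the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Phase order matches the log's restartCluster sequence.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running: kubeadm", args)
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %v failed: %v", p, err)
		}
	}
}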
I0221 09:12:29.244765 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:29.760000 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:30.260268 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:30.759711 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:12:30.824845 481686 api_server.go:71] duration metric: took 1.580135004s to wait for apiserver process to appear ... I0221 09:12:30.824881 481686 api_server.go:87] waiting for apiserver healthz status ... I0221 09:12:30.824894 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:12:35.260175 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0221 09:12:35.260248 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} I0221 09:12:35.760929 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:12:35.765652 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [-]poststarthook/ca-registration failed: reason withheld [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0221 09:12:35.765674 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [-]poststarthook/ca-registration failed: reason withheld [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0221 09:12:36.260914 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0221 09:12:36.307534 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0221 09:12:36.307571 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0221 09:12:36.760752 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0221 09:12:36.808563 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0221 09:12:36.808654 481686 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/ca-registration ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0221 09:12:37.260827 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:12:37.266069 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 09:12:37.272355 481686 api_server.go:140] control plane version: v1.16.0 I0221 09:12:37.272382 481686 api_server.go:130] duration metric: took 6.447494019s to wait for apiserver health ... I0221 09:12:37.272396 481686 cni.go:93] Creating CNI manager for "" I0221 09:12:37.272404 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:12:37.272414 481686 system_pods.go:43] waiting for kube-system pods to appear ... 
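The healthz exchange above is typical of an apiserver coming back up: first 403 (anonymous users may not read /healthz until RBAC bootstrap finishes), then 500 with a per-check [+]/[-] breakdown in the body, and finally 200 "ok". A minimal anonymous probe of the same endpoint, for illustration; it skips TLS verification only because it reads nothing sensitive, and a real client should trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Same endpoint the log checks; 403/500 bodies explain the failure.
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}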
I0221 09:12:37.282611 481686 system_pods.go:59] 7 kube-system pods found I0221 09:12:37.282652 481686 system_pods.go:61] "coredns-5644d7b6d9-4jfjr" [445faf1b-887e-484a-bb35-92f88222e76b] Running I0221 09:12:37.282658 481686 system_pods.go:61] "etcd-old-k8s-version-20220221090948-6550" [b3071ff1-0324-474f-ab77-8fd44e1ebc83] Running I0221 09:12:37.282662 481686 system_pods.go:61] "kube-apiserver-old-k8s-version-20220221090948-6550" [708fda44-6a97-49f1-95b0-7cc9c9d7ac36] Running I0221 09:12:37.282665 481686 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220221090948-6550" [27071ca4-76ac-4233-8ab3-79113ba20d1f] Running I0221 09:12:37.282669 481686 system_pods.go:61] "kube-proxy-tdxwc" [486ca50e-8d88-462f-ab2b-90c0b323fee8] Running I0221 09:12:37.282674 481686 system_pods.go:61] "kube-scheduler-old-k8s-version-20220221090948-6550" [337538dd-9afc-4bc6-8bea-2b54c6104252] Running I0221 09:12:37.282677 481686 system_pods.go:61] "storage-provisioner" [acc16a62-19b6-4669-88e9-91a96f7d0f59] Running I0221 09:12:37.282682 481686 system_pods.go:74] duration metric: took 10.258953ms to wait for pod list to return data ... I0221 09:12:37.282691 481686 node_conditions.go:102] verifying NodePressure condition ... I0221 09:12:37.286204 481686 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:12:37.286240 481686 node_conditions.go:123] node cpu capacity is 8 I0221 09:12:37.286253 481686 node_conditions.go:105] duration metric: took 3.557872ms to run NodePressure ... I0221 09:12:37.286273 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:12:37.453002 481686 kubeadm.go:737] waiting for restarted kubelet to initialise ... I0221 09:12:37.456863 481686 retry.go:31] will retry after 276.165072ms: kubelet not initialised I0221 09:12:37.736865 481686 retry.go:31] will retry after 540.190908ms: kubelet not initialised I0221 09:12:38.280617 481686 retry.go:31] will retry after 655.06503ms: kubelet not initialised I0221 09:12:38.939642 481686 retry.go:31] will retry after 791.196345ms: kubelet not initialised I0221 09:12:39.735022 481686 retry.go:31] will retry after 1.170244332s: kubelet not initialised I0221 09:12:40.909813 481686 retry.go:31] will retry after 2.253109428s: kubelet not initialised I0221 09:12:43.166877 481686 retry.go:31] will retry after 1.610739793s: kubelet not initialised I0221 09:12:44.782170 481686 retry.go:31] will retry after 2.804311738s: kubelet not initialised I0221 09:12:47.591132 481686 retry.go:31] will retry after 3.824918958s: kubelet not initialised I0221 09:12:51.421422 481686 retry.go:31] will retry after 7.69743562s: kubelet not initialised I0221 09:12:59.122620 481686 retry.go:31] will retry after 14.635568968s: kubelet not initialised I0221 09:13:13.762364 481686 kubeadm.go:752] kubelet initialised I0221 09:13:13.762387 481686 kubeadm.go:753] duration metric: took 36.309357684s waiting for restarted kubelet to initialise ... I0221 09:13:13.762394 481686 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:13:13.765803 481686 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4jfjr" in "kube-system" namespace to be "Ready" ... 
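The retry waits above (276ms, 540ms, 655ms, ... up to 14.6s) grow roughly geometrically with jitter, the standard way to poll a recovering component without hammering it. A small sketch of jittered exponential backoff in that spirit, not the actual retry.go; the base delay and attempt cap are arbitrary:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs f until it succeeds, doubling a jittered delay between tries.
func retry(maxAttempts int, base time.Duration, f func() error) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := f(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2 // geometric growth, as the logged waits suggest
	}
	return errors.New("retries exhausted")
}

func main() {
	_ = retry(5, 250*time.Millisecond, func() error { return errors.New("kubelet not initialised") })
}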
I0221 09:13:13.773307 481686 pod_ready.go:92] pod "coredns-5644d7b6d9-4jfjr" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.773329 481686 pod_ready.go:81] duration metric: took 7.502811ms waiting for pod "coredns-5644d7b6d9-4jfjr" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.773338 481686 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-vqqfc" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.776551 481686 pod_ready.go:92] pod "coredns-5644d7b6d9-vqqfc" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.776568 481686 pod_ready.go:81] duration metric: took 3.225081ms waiting for pod "coredns-5644d7b6d9-vqqfc" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.776577 481686 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.779686 481686 pod_ready.go:92] pod "etcd-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.779705 481686 pod_ready.go:81] duration metric: took 3.121899ms waiting for pod "etcd-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.779718 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.782821 481686 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:13.782840 481686 pod_ready.go:81] duration metric: took 3.114979ms waiting for pod "kube-apiserver-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:13.782849 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.161532 481686 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:14.161557 481686 pod_ready.go:81] duration metric: took 378.700547ms waiting for pod "kube-controller-manager-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.161570 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tdxwc" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.561430 481686 pod_ready.go:92] pod "kube-proxy-tdxwc" in "kube-system" namespace has status "Ready":"True" I0221 09:13:14.561454 481686 pod_ready.go:81] duration metric: took 399.878102ms waiting for pod "kube-proxy-tdxwc" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.561463 481686 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.962123 481686 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20220221090948-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:13:14.962149 481686 pod_ready.go:81] duration metric: took 400.67974ms waiting for pod "kube-scheduler-old-k8s-version-20220221090948-6550" in "kube-system" namespace to be "Ready" ... I0221 09:13:14.962160 481686 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace to be "Ready" ... 
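The pod_ready.go waits above boil down to: fetch the pod, look for its PodReady condition, and poll until it reports True or the 4m budget runs out. A client-go sketch of the same check; it needs k8s.io/client-go as a dependency, and the kubeconfig path and pod name below are placeholders:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-4jfjr", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}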
I0221 09:13:17.367179 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:19.367534 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:21.866320 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:23.867171 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:26.367049 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:28.367351 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:30.866816 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:04:01 UTC, end at Mon 2022-02-21 09:13:35 UTC. -- Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.680475703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681624533Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681657617Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681680124Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.681693834Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.687754849Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693110176Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693139050Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693144878Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.693335705Z" level=info msg="Loading containers: start." Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.777936498Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.813301783Z" level=info msg="Loading containers: done." 
Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.824636992Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.824690941Z" level=info msg="Daemon has completed initialization" Feb 21 09:04:03 bridge-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.843099718Z" level=info msg="API listen on [::]:2376" Feb 21 09:04:03 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:03.846792381Z" level=info msg="API listen on /var/run/docker.sock" Feb 21 09:04:41 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:41.138003188Z" level=info msg="ignoring event" container=a0be41a7d766cdaba9403bf9df8395ee04391f81bc4dbd0908e9d6ec829fc323 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:04:41 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:04:41.253100899Z" level=info msg="ignoring event" container=8a472a83eaf77bf4ed3c47adea9a900aa30c9f51e075e8c13198eae504dd5135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:02 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:05:02.429884245Z" level=info msg="ignoring event" container=b217dfe43376c251bb43088d9560ae3139c324922c017d0a0045ec73b8ca947a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:32 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:05:32.908656803Z" level=info msg="ignoring event" container=8e3788818a6b1aae56233b447d95584be4c66937b500037d196c1e07e84f5828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:06:17 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:06:17.566902390Z" level=info msg="ignoring event" container=bc46acfa3d7c866121ea03403a121b9e648442fceffd9a6a32c9256973a09d29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:07:13 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:07:13.543450894Z" level=info msg="ignoring event" container=40d03e6cd1a30b184ea894fccdefa5fdc7c1bc310d94a582d45c491c646f47ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:08:34 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:08:34.577542651Z" level=info msg="ignoring event" container=dedfecc4ece76a44315ebba0e63995f63460bc4dc34f01432953ce831b08926f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:10:38 bridge-20220221084933-6550 dockerd[460]: time="2022-02-21T09:10:38.559805431Z" level=info msg="ignoring event" container=293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e990cc7800b7c 6e38f40d628db 4 seconds ago Running storage-provisioner 6 58296d2ef92ae 293c64d3f2e2a 6e38f40d628db 3 minutes ago Exited storage-provisioner 5 58296d2ef92ae 4c6fcccfa1394 k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 4 minutes ago Running dnsutils 0 74ab993c5eef1 8eb32092067f9 a4ca41631cc7a 9 minutes ago Running coredns 0 b299fa78d336f cd31aa9c0c743 2114245ec4d6b 9 minutes ago Running kube-proxy 0 
e5bc271195fab d092f7171bc6a 25444908517a5 9 minutes ago Running kube-controller-manager 0 79155ed30105b 6e69145b30ada aceacb6244f9f 9 minutes ago Running kube-scheduler 0 718e986929bb6 5eb857f7738e9 25f8c7f3da61c 9 minutes ago Running etcd 0 0691551fcb0ea 6a850a90d786b 62930710c9634 9 minutes ago Running kube-apiserver 0 3d86608597cbc * * ==> coredns [8eb32092067f] <== * [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" * * ==> describe nodes <== * Name: bridge-20220221084933-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=bridge-20220221084933-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=bridge-20220221084933-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T09_04_17_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 09:04:13 +0000 Taints: Unschedulable: false Lease: HolderIdentity: bridge-20220221084933-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:13:28 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:08:53 +0000 Mon, 21 Feb 2022 09:04:27 
+0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.67.2 Hostname: bridge-20220221084933-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: f05716f6-a1c5-4503-b665-f7090020f00e Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default netcat-668db85669-f2pzb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s kube-system coredns-64897985d-7jshp 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 9m6s kube-system etcd-bridge-20220221084933-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 9m18s kube-system kube-apiserver-bridge-20220221084933-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 9m18s kube-system kube-controller-manager-bridge-20220221084933-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 9m18s kube-system kube-proxy-pzvfl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m6s kube-system kube-scheduler-bridge-20220221084933-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 9m18s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m4s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 9m5s kube-proxy Normal NodeHasSufficientMemory 9m18s kubelet Node bridge-20220221084933-6550 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m18s kubelet Node bridge-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m18s kubelet Node bridge-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 9m18s kubelet Updated Node Allocatable limit across pods Normal Starting 9m18s kubelet Starting kubelet.
Normal NodeReady 9m8s kubelet Node bridge-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [Feb21 09:13] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.015846] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000013] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027979] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +16.774814] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.011852] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023907] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.959842] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.007853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.027910] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.963841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.035853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023933] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 * * ==> etcd [5eb857f7738e] <== * {"level":"info","ts":"2022-02-21T09:04:11.122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T09:04:11.122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"} {"level":"info","ts":"2022-02-21T09:04:11.122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"} {"level":"info","ts":"2022-02-21T09:04:11.122Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:bridge-20220221084933-6550 
ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-02-21T09:04:11.124Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-02-21T09:04:11.123Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-02-21T09:04:11.125Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:04:11.125Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"} {"level":"warn","ts":"2022-02-21T09:08:46.315Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"175.262593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:08:46.315Z","caller":"traceutil/trace.go:171","msg":"trace[1821785760] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:603; }","duration":"175.39819ms","start":"2022-02-21T09:08:46.140Z","end":"2022-02-21T09:08:46.315Z","steps":["trace[1821785760] 'count revisions from in-memory index tree' (duration: 175.159153ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:08:46.315Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.499053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2539"} {"level":"info","ts":"2022-02-21T09:08:46.316Z","caller":"traceutil/trace.go:171","msg":"trace[653792341] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:603; }","duration":"185.837506ms","start":"2022-02-21T09:08:46.130Z","end":"2022-02-21T09:08:46.316Z","steps":["trace[653792341] 'range keys from in-memory index tree' (duration: 185.385856ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:08:46.522Z","caller":"traceutil/trace.go:171","msg":"trace[142678578] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"103.930628ms","start":"2022-02-21T09:08:46.418Z","end":"2022-02-21T09:08:46.522Z","steps":["trace[142678578] 'process raft request' (duration: 103.732829ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:09:55.382Z","caller":"traceutil/trace.go:171","msg":"trace[2093338953] linearizableReadLoop","detail":"{readStateIndex:713; appliedIndex:712; }","duration":"151.068301ms","start":"2022-02-21T09:09:55.231Z","end":"2022-02-21T09:09:55.382Z","steps":["trace[2093338953] 'read index received' (duration: 52.660289ms)","trace[2093338953] 'applied index is now lower than readState.Index' (duration: 98.407058ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:09:55.382Z","caller":"traceutil/trace.go:171","msg":"trace[419954547] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; 
}","duration":"232.574305ms","start":"2022-02-21T09:09:55.150Z","end":"2022-02-21T09:09:55.382Z","steps":["trace[419954547] 'process raft request' (duration: 134.193512ms)","trace[419954547] 'compare' (duration: 98.260909ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:09:55.382Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.218895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:55.382Z","caller":"traceutil/trace.go:171","msg":"trace[1744173959] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:631; }","duration":"151.266588ms","start":"2022-02-21T09:09:55.231Z","end":"2022-02-21T09:09:55.382Z","steps":["trace[1744173959] 'agreement among raft nodes before linearized reading' (duration: 151.177123ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:55.697Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"188.096018ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:55.697Z","caller":"traceutil/trace.go:171","msg":"trace[2037857497] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:631; }","duration":"188.171073ms","start":"2022-02-21T09:09:55.509Z","end":"2022-02-21T09:09:55.697Z","steps":["trace[2037857497] 'range keys from in-memory index tree' (duration: 188.020649ms)"],"step_count":1} * * ==> kernel <== * 09:13:35 up 56 min, 0 users, load average: 0.49, 1.43, 2.40 Linux bridge-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [6a850a90d786] <== * I0221 09:04:13.663490 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 09:04:13.663528 1 apf_controller.go:322] Running API Priority and Fairness config worker I0221 09:04:13.663538 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 09:04:13.676956 1 cache.go:39] Caches are synced for autoregister controller I0221 09:04:13.680534 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 09:04:14.562572 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0221 09:04:14.566706 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0221 09:04:14.568768 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0221 09:04:14.570005 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0221 09:04:14.570021 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
I0221 09:04:15.024009 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0221 09:04:15.062781 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0221 09:04:15.135946 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 09:04:15.142919 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2] I0221 09:04:15.144226 1 controller.go:611] quota admission added evaluator for: endpoints I0221 09:04:15.147956 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 09:04:15.718373 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0221 09:04:16.867062 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 09:04:16.877719 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 09:04:16.902701 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 09:04:17.127153 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 09:04:29.273786 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 09:04:29.373045 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 09:04:30.105604 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:08:42.077781 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.107.145.137] * * ==> kube-controller-manager [d092f7171bc6] <== * I0221 09:04:28.669670 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: W0221 09:04:28.669737 1 node_lifecycle_controller.go:1012] Missing timestamp for Node bridge-20220221084933-6550. Assuming now as a timestamp. I0221 09:04:28.669778 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0221 09:04:28.669879 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I0221 09:04:28.670174 1 event.go:294] "Event occurred" object="bridge-20220221084933-6550" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node bridge-20220221084933-6550 event: Registered Node bridge-20220221084933-6550 in Controller" I0221 09:04:28.670716 1 shared_informer.go:247] Caches are synced for ephemeral I0221 09:04:28.673581 1 shared_informer.go:247] Caches are synced for daemon sets I0221 09:04:28.715587 1 shared_informer.go:247] Caches are synced for disruption I0221 09:04:28.715612 1 disruption.go:371] Sending events to api server. I0221 09:04:28.715716 1 shared_informer.go:247] Caches are synced for stateful set I0221 09:04:28.718021 1 shared_informer.go:247] Caches are synced for namespace I0221 09:04:28.720447 1 shared_informer.go:247] Caches are synced for service account I0221 09:04:28.773772 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:28.781129 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:29.198204 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:29.214958 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:29.214983 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0221 09:04:29.275759 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:04:29.379313 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pzvfl" I0221 09:04:29.578194 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-tl8l4" I0221 09:04:29.581729 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-7jshp" I0221 09:04:29.722333 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:04:29.727479 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-tl8l4" I0221 09:08:42.096931 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:08:42.103134 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-f2pzb" * * ==> kube-proxy [cd31aa9c0c74] <== * I0221 09:04:30.030088 1 node.go:163] Successfully retrieved node IP: 192.168.67.2 I0221 09:04:30.030160 1 server_others.go:138] "Detected node IP" address="192.168.67.2" I0221 09:04:30.030199 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:04:30.053878 1 server_others.go:206] "Using iptables Proxier" I0221 09:04:30.053920 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:04:30.053930 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:04:30.053961 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:04:30.054398 1 server.go:656] "Version info" version="v1.23.4" I0221 09:04:30.055053 1 config.go:317] "Starting service config controller" I0221 09:04:30.055088 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:04:30.102436 1 config.go:226] "Starting endpoint slice config controller" I0221 09:04:30.102491 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:04:30.155932 1 shared_informer.go:247] Caches are synced for service config I0221 09:04:30.203218 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [6e69145b30ad] <== * W0221 09:04:13.645199 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 09:04:13.645279 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" 
cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:04:13.645299 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:04:13.645306 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 09:04:13.645541 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0221 09:04:13.645590 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 09:04:13.645643 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 09:04:13.645673 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:04:13.646196 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0221 09:04:13.646281 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0221 09:04:14.478595 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:04:14.478636 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 09:04:14.628486 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 09:04:14.628528 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 09:04:14.685893 1 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0221 09:04:14.685922 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0221 09:04:14.756800 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0221 09:04:14.756941 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0221 09:04:14.766918 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 09:04:14.766966 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 09:04:14.776948 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 09:04:14.777101 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0221 09:04:14.781410 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0221 09:04:14.781648 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope I0221 09:04:16.729661 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 09:04:01 UTC, end at Mon 2022-02-21 09:13:35 UTC. 
-- Feb 21 09:10:55 bridge-20220221084933-6550 kubelet[1944]: I0221 09:10:55.409689 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:10:55 bridge-20220221084933-6550 kubelet[1944]: E0221 09:10:55.409969 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:07 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:07.409264 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:07 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:07.409562 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:20 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:20.410060 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:20 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:20.410285 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:31 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:31.410110 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:31 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:31.410324 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:43 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:43.409785 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:43 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:43.410086 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:11:56 bridge-20220221084933-6550 kubelet[1944]: I0221 09:11:56.409250 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:11:56 bridge-20220221084933-6550 kubelet[1944]: E0221 09:11:56.409476 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:08 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:08.409182 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:08 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:08.409469 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:23 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:23.409337 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:23 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:23.409560 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:37 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:37.409441 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:37 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:37.409740 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:12:51 bridge-20220221084933-6550 kubelet[1944]: I0221 09:12:51.410017 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:12:51 bridge-20220221084933-6550 kubelet[1944]: E0221 09:12:51.410235 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:13:04 bridge-20220221084933-6550 kubelet[1944]: I0221 09:13:04.409239 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae" Feb 21 09:13:04 bridge-20220221084933-6550 kubelet[1944]: E0221 09:13:04.409475 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0 Feb 21 09:13:18 bridge-20220221084933-6550 kubelet[1944]: 
I0221 09:13:18.412224 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae"
Feb 21 09:13:18 bridge-20220221084933-6550 kubelet[1944]: E0221 09:13:18.412491 1944 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0)\"" pod="kube-system/storage-provisioner" podUID=2fb4fff2-d992-4cc8-ab7c-f5b6f1cd40c0
Feb 21 09:13:31 bridge-20220221084933-6550 kubelet[1944]: I0221 09:13:31.409896 1944 scope.go:110] "RemoveContainer" containerID="293c64d3f2e2a9f95c0d0cefe6c39000de6c3e12dcd13952722676d0fdea03ae"
*
* ==> storage-provisioner [293c64d3f2e2] <==
*
I0221 09:10:08.540866 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0221 09:10:38.544484 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner [e990cc7800b7] <==
*
I0221 09:13:31.532505 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p bridge-20220221084933-6550 -n bridge-20220221084933-6550
E0221 09:13:36.308558 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
helpers_test.go:262: (dbg) Run: kubectl --context bridge-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/bridge]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context bridge-20220221084933-6550 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 describe pod : exit status 1 (43.408452ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context bridge-20220221084933-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "bridge-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p bridge-20220221084933-6550
E0221 09:13:37.589308 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p bridge-20220221084933-6550: (2.64548242s)
--- FAIL: TestNetworkPlugins/group/bridge (588.30s)
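Note what the two storage-provisioner blocks above say together: the container initializes, then aborts 30 seconds later because it cannot reach the apiserver service VIP (dial tcp 10.96.0.1:443: i/o timeout) -- the same 10.96.0.1 the apiserver allocated for default/kubernetes earlier in this log. A minimal sketch of how one could confirm that by hand while the profile still exists; these commands are not part of the harness, and curl being available inside the kicbase node image is an assumption:

# Sketch only: probe the service VIP that storage-provisioner timed out on.
PROFILE=bridge-20220221084933-6550

# From inside the node: is the kubernetes service VIP reachable at all?
# (curl inside the node image is an assumption.)
out/minikube-linux-amd64 ssh -p "$PROFILE" "curl -sk --max-time 5 https://10.96.0.1/version"

# Logs from the previous (crashed) storage-provisioner container.
kubectl --context "$PROFILE" -n kube-system logs pod/storage-provisioner --previous

# Events recorded for the crash-looping pod.
kubectl --context "$PROFILE" -n kube-system get events --field-selector involvedObject.name=storage-provisioner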
=== FAIL: . TestNetworkPlugins/group/enable-default-cni/DNS (360.29s)
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155817581s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:09:00.213316 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:09:05.984497 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137360185s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127057858s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136062008s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.189892235s)
-- stdout --
;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
command terminated with exit code 1
** /stderr **
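The probe that keeps failing above is easy to repeat by hand, since net_test.go:163 just execs into the netcat deployment and resolves kubernetes.default. A sketch of the same check plus the obvious follow-ups; the context name is taken from this run, and the k8s-app=kube-dns label selector for CoreDNS is an assumption (it is the stock kubeadm label):

# Sketch only: repeat the DNS probe from net_test.go:163 by hand.
CTX=enable-default-cni-20220221084933-6550

# The exact check the test loops on; the test wants "10.96.0.1" in the answer.
kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default

# Which resolver is the pod actually using?
kubectl --context "$CTX" exec deployment/netcat -- cat /etc/resolv.conf

# Is CoreDNS running, and what is it logging?
# (k8s-app=kube-dns is the stock kubeadm label; treat it as an assumption.)
kubectl --context "$CTX" -n kube-system get pods -l k8s-app=kube-dns
kubectl --context "$CTX" -n kube-system logs -l k8s-app=kube-dns --tail=50

net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:10:10.799921 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143725941s)
-- stdout --
;; connection timed out; no servers could be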
reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:10:29.028950 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory E0221 09:10:33.614245 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128824565s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1416818s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133379348s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:11:44.054101 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144342485s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122491787s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** E0221 09:13:07.990646 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory net_test.go:163: (dbg) Run: kubectl --context 
enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default E0221 09:14:29.911677 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.278098961s) -- stdout -- ;; connection timed out; no servers could be reached -- /stdout -- ** stderr ** command terminated with exit code 1 ** /stderr ** net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1 net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"* --- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (360.29s) === FAIL: . TestNetworkPlugins/group/enable-default-cni (671.75s) net_test.go:198: "enable-default-cni" test finished in 25m4.867153999s, failed=true net_test.go:199: *** TestNetworkPlugins/group/enable-default-cni FAILED at 2022-02-21 09:14:38.628715991 +0000 UTC m=+2971.391035583 helpers_test.go:223: -----------------------post-mortem-------------------------------- helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/enable-default-cni]: docker inspect <====== helpers_test.go:232: (dbg) Run: docker inspect enable-default-cni-20220221084933-6550 helpers_test.go:236: (dbg) docker inspect enable-default-cni-20220221084933-6550: -- stdout -- [ { "Id": "5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7", "Created": "2022-02-21T09:03:39.327100743Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 444720, "ExitCode": 0, "Error": "", "StartedAt": "2022-02-21T09:03:39.776155311Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43", "ResolvConfPath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/resolv.conf", "HostnamePath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/hostname", "HostsPath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/hosts", "LogPath": "/var/lib/docker/containers/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7/5870309f6f9284c8e963c0ab906cf1ca7e5c5fcaf057bbf1e1417e629a2119a7-json.log", "Name": "/enable-default-cni-20220221084933-6550", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "unconfined", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "enable-default-cni-20220221084933-6550:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "enable-default-cni-20220221084933-6550", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": 
null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/
docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff", "MergedDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c/merged", "UpperDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c/diff", "WorkDir": "/var/lib/docker/overlay2/cf93df48f7bab1864de803bed96e0e4a14ae3aa65638d92ed832158480bb2c5c/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "enable-default-cni-20220221084933-6550", "Source": "/var/lib/docker/volumes/enable-default-cni-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "enable-default-cni-20220221084933-6550", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2", "Volumes": null, "WorkingDir": "", "Entrypoint": [ 
"/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "enable-default-cni-20220221084933-6550", "name.minikube.sigs.k8s.io": "enable-default-cni-20220221084933-6550", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "62113ff6c1601877b00e3b9a107b91c292f3345ac201f7c3f1e01039af08dc28", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49389" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49388" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49385" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49387" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49386" } ] }, "SandboxKey": "/var/run/docker/netns/62113ff6c160", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "enable-default-cni-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.58.2" }, "Links": null, "Aliases": [ "5870309f6f92", "enable-default-cni-20220221084933-6550" ], "NetworkID": "3436ceea501355dda724417d7ee94ad045ea978227c60239b598f71c466f16a5", "EndpointID": "b305f3dcbfb8ac283a12706e619e07887bffe5d726304e09b99b47f88c19e0ea", "Gateway": "192.168.58.1", "IPAddress": "192.168.58.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:3a:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p enable-default-cni-20220221084933-6550 -n enable-default-cni-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/enable-default-cni FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/enable-default-cni]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p enable-default-cni-20220221084933-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p enable-default-cni-20220221084933-6550 logs -n 25: (1.202371297s) helpers_test.go:253: TestNetworkPlugins/group/enable-default-cni logs: -- stdout -- * * ==> Audit <== * |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | -p | false-20220221084934-6550 logs | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:41 UTC | Mon, 21 Feb 2022 09:02:42 UTC | | | -n 25 | | | | | | | delete | -p false-20220221084934-6550 | false-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:43 UTC | Mon, 21 Feb 2022 09:02:46 UTC | | -p | custom-weave-20220221084934-6550 | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:27 UTC | Mon, 21 Feb 2022 09:03:28 UTC | | | logs -n 25 | | | | | | | delete | -p | custom-weave-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:29 UTC | Mon, 21 Feb 2022 09:03:32 UTC | | | 
custom-weave-20220221084934-6550 | | | | | | | start | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:02:46 UTC | Mon, 21 Feb 2022 09:03:35 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | pgrep -a kubelet | | | | | | | -p | kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:44 UTC | Mon, 21 Feb 2022 09:09:45 UTC | | | logs -n 25 | | | | | | | delete | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:45 UTC | Mon, 21 Feb 2022 09:09:48 UTC | | start | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:48 UTC | Mon, 21 Feb 2022 09:11:57 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --kvm-network=default | | | | | | | | --kvm-qemu-uri=qemu:///system | | | | | | | | --disable-driver-mounts | | | | | | | | --keep-context=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.16.0 | | | | | | | addons | enable metrics-server -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:06 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | 
| | | | --registries=MetricsServer=fake.domain | | | | | | | start | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:25 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --network-plugin=kubenet | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:15 UTC | Mon, 21 Feb 2022 09:12:15 UTC | | | pgrep -a kubelet | | | | | | | stop | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:17 UTC | Mon, 21 Feb 2022 09:12:17 UTC | | | old-k8s-version-20220221090948-6550 | | | | | | | | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | | | -p | bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:35 UTC | Mon, 21 Feb 2022 09:13:36 UTC | | | logs -n 25 | | | | | | | delete | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:36 UTC | Mon, 21 Feb 2022 09:13:39 UTC | | start | -p no-preload-20220221091339-6550 | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:39 UTC | Mon, 21 Feb 2022 09:14:33 UTC | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --preload=false | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | | --kubernetes-version=v1.23.5-rc.0 | | | | | | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2022/02/21 09:13:39 Running on machine: ubuntu-20-agent-5 Binary: Built with gc go1.17.7 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0221 09:13:39.344379 488103 out.go:297] Setting OutFile to fd 1 ... I0221 09:13:39.344478 488103 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:13:39.344488 488103 out.go:310] Setting ErrFile to fd 2... I0221 09:13:39.344492 488103 out.go:344] TERM=,COLORTERM=, which probably does not support color I0221 09:13:39.344595 488103 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin I0221 09:13:39.344892 488103 out.go:304] Setting JSON to false I0221 09:13:39.346789 488103 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3374,"bootTime":1645431446,"procs":677,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"} I0221 09:13:39.346878 488103 start.go:122] virtualization: kvm guest I0221 09:13:39.349398 488103 out.go:176] * [no-preload-20220221091339-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64) I0221 09:13:39.350741 488103 out.go:176] - MINIKUBE_LOCATION=13641 I0221 09:13:39.349556 488103 notify.go:193] Checking for updates... 
I0221 09:13:39.352383 488103 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:13:39.353788 488103 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:13:39.355117 488103 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:13:39.356401 488103 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:13:39.356948 488103 config.go:176] Loaded profile config "enable-default-cni-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:13:39.357057 488103 config.go:176] Loaded profile config "kubenet-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:13:39.357179 488103 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0 I0221 09:13:39.357233 488103 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:13:39.403528 488103 docker.go:132] docker version: linux-20.10.12 I0221 09:13:39.403617 488103 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:13:39.496690 488103 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:13:39.434778262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:13:39.496825 488103 docker.go:237] overlay module found I0221 09:13:39.499606 488103 out.go:176] * Using the docker driver based on user configuration I0221 09:13:39.499632 488103 start.go:281] selected driver: docker I0221 09:13:39.499637 488103 start.go:798] validating driver "docker" against I0221 09:13:39.499657 488103 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:13:39.499712 488103 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:13:39.499733 488103 out.go:241] ! Your cgroup does not allow setting memory. I0221 09:13:39.501118 488103 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:13:39.501718 488103 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:13:39.594032 488103 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:13:39.532202908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} I0221 09:13:39.594178 488103 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0221 09:13:39.594318 488103 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0221 09:13:39.594342 488103 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:13:39.594358 488103 cni.go:93] Creating CNI manager for "" I0221 09:13:39.594366 488103 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:13:39.594374 488103 start_flags.go:302] config: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:13:39.596655 488103 out.go:176] * Starting control plane node no-preload-20220221091339-6550 in cluster no-preload-20220221091339-6550 I0221 09:13:39.596693 488103 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:13:39.598166 488103 out.go:176] * Pulling base image ... 
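The "Pulling base image ..." step first probes the local daemon before downloading anything. A rough manual equivalent, using the same pinned image reference that appears in the config dump above (the `|| docker pull` fallback is an illustration of the logic, not a command minikube itself runs):

$ IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2'
$ docker image inspect --format '{{.Id}}' "$IMG" || docker pull "$IMG"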
I0221 09:13:39.598214 488103 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:13:39.598325 488103 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:13:39.598355 488103 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:13:39.598395 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json: {Name:mka1935bea8c99f28dd349264d0742b49f686366 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:13:39.598520 488103 cache.go:107] acquiring lock: {Name:mkae39637d54454769ea96c0928557495a2624a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598520 488103 cache.go:107] acquiring lock: {Name:mkf4838fe0f0754a09f1960b33e83e9fd73716a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598567 488103 cache.go:107] acquiring lock: {Name:mkc848fd9c1e80ffd1414dd8603c19c641b3fcb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598659 488103 cache.go:107] acquiring lock: {Name:mk048af2cde148e8a512f7653817cea4bb1a47e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598647 488103 cache.go:107] acquiring lock: {Name:mkd0cd2ae3afc8e39e716bbcd5f1e196bdbc0e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598667 488103 cache.go:107] acquiring lock: {Name:mk8eae83c87e69d4f61d57feebab23b9c618f6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598666 488103 cache.go:107] acquiring lock: {Name:mk4db3a52d1f4fba9dc9223f3164cb8742f00f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598675 488103 cache.go:107] acquiring lock: {Name:mk8cb7540d8a1bd7faccdcc974630f93843749a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598703 488103 cache.go:107] acquiring lock: {Name:mk9f52e4209628388c7268565716f70b6a94e740 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598735 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists I0221 09:13:39.598760 488103 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 117.095µs I0221 09:13:39.598776 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists I0221 09:13:39.598778 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists I0221 09:13:39.598776 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 exists I0221 09:13:39.598777 488103 cache.go:107] acquiring lock: 
{Name:mk0340c3f1bf4216c7deeea4078501a3da4b3533 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.598797 488103 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 288.386µs I0221 09:13:39.598781 488103 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded I0221 09:13:39.598814 488103 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded I0221 09:13:39.598735 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 exists I0221 09:13:39.598822 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists I0221 09:13:39.598825 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 exists I0221 09:13:39.598829 488103 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" took 174.192µs I0221 09:13:39.598825 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists I0221 09:13:39.598842 488103 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 succeeded I0221 09:13:39.598838 488103 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 222.709µs I0221 09:13:39.598855 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 exists I0221 09:13:39.598856 488103 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 291.323µs I0221 09:13:39.598864 488103 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists I0221 09:13:39.598868 488103 cache.go:80] save 
to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded I0221 09:13:39.598874 488103 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" took 100.458µs I0221 09:13:39.598891 488103 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 succeeded I0221 09:13:39.598796 488103 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 131.724µs I0221 09:13:39.598901 488103 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded I0221 09:13:39.598877 488103 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 345.743µs I0221 09:13:39.598908 488103 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded I0221 09:13:39.598857 488103 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded I0221 09:13:39.598801 488103 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" took 100.527µs I0221 09:13:39.598922 488103 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 succeeded I0221 09:13:39.598841 488103 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" took 337.448µs I0221 09:13:39.598944 488103 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 succeeded I0221 09:13:39.598955 488103 cache.go:87] Successfully saved all images to host disk. 
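All of the cache.go:115 / cache.go:80 pairs above are hits against image tarballs already on disk, which is why each "took" only a few hundred microseconds despite --preload=false. The cache layout can be confirmed by hand (MINIKUBE_HOME path exactly as logged):

$ ls -R /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/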
I0221 09:13:39.644932 488103 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0221 09:13:39.644975 488103 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0221 09:13:39.644992 488103 cache.go:208] Successfully downloaded all kic artifacts I0221 09:13:39.645040 488103 start.go:313] acquiring machines lock for no-preload-20220221091339-6550: {Name:mk3240de6571e839de8f8161d174b6e05c7d8988 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:13:39.645186 488103 start.go:317] acquired machines lock for "no-preload-20220221091339-6550" in 121.461µs I0221 09:13:39.645211 488103 start.go:89] Provisioning new machine with config: &{Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:13:39.645300 488103 start.go:126] createHost starting for "" (driver="docker") I0221 09:13:38.369177 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:40.867726 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:39.647694 488103 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ... 
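Note the interleaved pid 481686 entries: a second profile's test (the old-k8s-version metrics-server readiness poll) is writing to the same stream concurrently. The threadid field of the glog format described in the preamble makes the two streams separable, e.g. (log file name hypothetical):

$ grep -E '^[IWEF][0-9]{4} [0-9:.]+ +488103 ' lastStart.txt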
I0221 09:13:39.647941 488103 start.go:160] libmachine.API.Create for "no-preload-20220221091339-6550" (driver="docker") I0221 09:13:39.647977 488103 client.go:168] LocalClient.Create starting I0221 09:13:39.648053 488103 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem I0221 09:13:39.648090 488103 main.go:130] libmachine: Decoding PEM data... I0221 09:13:39.648111 488103 main.go:130] libmachine: Parsing certificate... I0221 09:13:39.648190 488103 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem I0221 09:13:39.648233 488103 main.go:130] libmachine: Decoding PEM data... I0221 09:13:39.648252 488103 main.go:130] libmachine: Parsing certificate... I0221 09:13:39.648667 488103 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0221 09:13:39.682574 488103 cli_runner.go:180] docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0221 09:13:39.682642 488103 network_create.go:254] running [docker network inspect no-preload-20220221091339-6550] to gather additional debugging logs... 
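The failing inspect above is expected on a first start: the profile's network does not exist yet. The same probe can be reproduced by hand with the Go template fields from the log; it exits 1 with "Error: No such network" until the network is created:

$ docker network inspect no-preload-20220221091339-6550 \
    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'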
I0221 09:13:39.682665 488103 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550
W0221 09:13:39.718056 488103 cli_runner.go:180] docker network inspect no-preload-20220221091339-6550 returned with exit code 1
I0221 09:13:39.718088 488103 network_create.go:257] error running [docker network inspect no-preload-20220221091339-6550]: docker network inspect no-preload-20220221091339-6550: exit status 1
stdout:
[]

stderr:
Error: No such network: no-preload-20220221091339-6550
I0221 09:13:39.718118 488103 network_create.go:259] output of [docker network inspect no-preload-20220221091339-6550]:
-- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: no-preload-20220221091339-6550

** /stderr **
I0221 09:13:39.718181 488103 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:13:39.753279 488103 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-702b27ce9c6c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:47:23:7f}}
I0221 09:13:39.754138 488103 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3436ceea5013 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ca:78:ad:42}}
I0221 09:13:39.755228 488103 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000114198] misses:0}
I0221 09:13:39.755270 488103 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0221 09:13:39.755296 488103 network_create.go:106] attempt to create docker network no-preload-20220221091339-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
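network.go walks candidate /24s (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) and reserves the first one no existing bridge claims. A rough manual equivalent for listing the subnets already taken:

$ docker network ls -q --filter driver=bridge \
    | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'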
I0221 09:13:39.755356 488103 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220221091339-6550 I0221 09:13:39.825551 488103 network_create.go:90] docker network no-preload-20220221091339-6550 192.168.67.0/24 created I0221 09:13:39.825583 488103 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20220221091339-6550" container I0221 09:13:39.825652 488103 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0221 09:13:39.861028 488103 cli_runner.go:133] Run: docker volume create no-preload-20220221091339-6550 --label name.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --label created_by.minikube.sigs.k8s.io=true I0221 09:13:39.896121 488103 oci.go:102] Successfully created a docker volume no-preload-20220221091339-6550 I0221 09:13:39.896221 488103 cli_runner.go:133] Run: docker run --rm --name no-preload-20220221091339-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --entrypoint /usr/bin/test -v no-preload-20220221091339-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0221 09:13:40.442915 488103 oci.go:106] Successfully prepared a docker volume no-preload-20220221091339-6550 I0221 09:13:40.442979 488103 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker W0221 09:13:40.443043 488103 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0221 09:13:40.443052 488103 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0221 09:13:40.443100 488103 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'" I0221 09:13:40.538914 488103 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220221091339-6550 --name no-preload-20220221091339-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220221091339-6550 --network no-preload-20220221091339-6550 --ip 192.168.67.2 --volume no-preload-20220221091339-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 I0221 09:13:40.958501 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Running}} I0221 09:13:40.997225 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:13:41.032772 488103 cli_runner.go:133] Run: docker exec no-preload-20220221091339-6550 stat /var/lib/dpkg/alternatives/iptables I0221 09:13:41.103834 488103 oci.go:281] the created container "no-preload-20220221091339-6550" has a running status. 
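The container-state checks above can be repeated verbatim; once the kic container is up, the same inspect minikube runs should report "running":

$ docker container inspect no-preload-20220221091339-6550 --format '{{.State.Status}}'
running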
I0221 09:13:41.103871 488103 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa... I0221 09:13:41.230681 488103 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0221 09:13:41.322050 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:13:41.360388 488103 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0221 09:13:41.360414 488103 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220221091339-6550 chown docker:docker /home/docker/.ssh/authorized_keys] I0221 09:13:41.453502 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:13:41.497205 488103 machine.go:88] provisioning docker machine ... I0221 09:13:41.497243 488103 ubuntu.go:169] provisioning hostname "no-preload-20220221091339-6550" I0221 09:13:41.497302 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:41.537889 488103 main.go:130] libmachine: Using SSH client type: native I0221 09:13:41.538087 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 } I0221 09:13:41.538103 488103 main.go:130] libmachine: About to run SSH command: sudo hostname no-preload-20220221091339-6550 && echo "no-preload-20220221091339-6550" | sudo tee /etc/hostname I0221 09:13:41.672020 488103 main.go:130] libmachine: SSH cmd err, output: : no-preload-20220221091339-6550 I0221 09:13:41.672091 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:41.706730 488103 main.go:130] libmachine: Using SSH client type: native I0221 09:13:41.706865 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 } I0221 09:13:41.706883 488103 main.go:130] libmachine: About to run SSH command: if ! 
grep -xq '.*\sno-preload-20220221091339-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220221091339-6550/g' /etc/hosts; else echo '127.0.1.1 no-preload-20220221091339-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:13:41.830905 488103 main.go:130] libmachine: SSH cmd err, output: : I0221 09:13:41.830942 488103 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:13:41.830958 488103 ubuntu.go:177] setting up certificates I0221 09:13:41.830971 488103 provision.go:83] configureAuth start I0221 09:13:41.831055 488103 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:13:41.865655 488103 provision.go:138] copyHostCerts I0221 09:13:41.865724 488103 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:13:41.865734 488103 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:13:41.865815 488103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:13:41.865907 488103 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... 
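copyHostCerts refreshes the CA and client material under MINIKUBE_HOME before a per-node server cert is minted. The copied certificates can be sanity-checked with standard openssl (not something minikube runs itself; path exactly as logged):

$ openssl x509 -noout -subject -dates \
    -in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem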
I0221 09:13:41.865933 488103 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:13:41.865964 488103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:13:41.866043 488103 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:13:41.866057 488103 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:13:41.866086 488103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:13:41.866155 488103 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220221091339-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220221091339-6550] I0221 09:13:42.128981 488103 provision.go:172] copyRemoteCerts I0221 09:13:42.129042 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:13:42.129079 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:42.164031 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:42.250851 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:13:42.269267 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes) I0221 09:13:42.288702 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0221 09:13:42.307301 488103 provision.go:86] duration metric: configureAuth took 476.316023ms I0221 09:13:42.307335 488103 ubuntu.go:193] setting minikube options for container-runtime I0221 09:13:42.307536 488103 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, 
KubernetesVersion=v1.23.5-rc.0
I0221 09:13:42.307596 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:42.343570 488103 main.go:130] libmachine: Using SSH client type: native
I0221 09:13:42.343712 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 }
I0221 09:13:42.343726 488103 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 09:13:42.463140 488103 main.go:130] libmachine: SSH cmd err, output: : overlay
I0221 09:13:42.463164 488103 ubuntu.go:71] root file system type: overlay
I0221 09:13:42.463293 488103 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 09:13:42.463344 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:42.497372 488103 main.go:130] libmachine: Using SSH client type: native
I0221 09:13:42.497513 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 }
I0221 09:13:42.497574 488103 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 09:13:42.627970 488103 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0221 09:13:42.628056 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:42.664000 488103 main.go:130] libmachine: Using SSH client type: native
I0221 09:13:42.664164 488103 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49414 }
I0221 09:13:42.664184 488103 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:13:43.325731 488103 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-02-21 09:13:42.619122114 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 09:13:43.325769 488103 machine.go:91] provisioned docker machine in 1.828543141s
I0221 09:13:43.325779 488103 client.go:171] LocalClient.Create took 3.677794054s
I0221 09:13:43.325796 488103 start.go:168] duration metric: libmachine.API.Create for "no-preload-20220221091339-6550" took 3.677856275s
I0221 09:13:43.325810 488103 start.go:267] post-start starting for "no-preload-20220221091339-6550" (driver="docker")
I0221 09:13:43.325821 488103 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:13:43.325879 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:13:43.325916 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550
I0221 09:13:43.361077 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker}
I0221 09:13:43.450978 488103 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:13:43.453753 488103 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:13:43.453776 488103 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:13:43.453783 488103 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:13:43.453788 488103 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:13:43.453797 488103 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
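The diff-or-replace one-liner above is idempotent: when the rendered unit matches the installed one, diff exits 0 and the mv/daemon-reload/restart branch never runs; here they differed, so docker was restarted with the new ExecStart. The installed unit can be verified afterwards the same way minikube itself does a moment later:

$ sudo systemctl cat docker.service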
I0221 09:13:43.453844 488103 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:13:43.453909 488103 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:13:43.453979 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:13:43.460659 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:13:43.478443 488103 start.go:270] post-start completed in 152.616099ms I0221 09:13:43.478780 488103 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:13:43.513485 488103 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:13:43.513709 488103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:13:43.513749 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:43.547929 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:43.631425 488103 start.go:129] duration metric: createHost completed in 3.986113499s I0221 09:13:43.631458 488103 start.go:80] releasing machines lock for "no-preload-20220221091339-6550", held for 3.986260089s I0221 09:13:43.631557 488103 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:13:43.666430 488103 ssh_runner.go:195] Run: systemctl --version I0221 09:13:43.666485 488103 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:13:43.666549 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:43.666486 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:13:43.704263 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:43.704451 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:13:43.932517 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:13:43.941933 488103 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 
09:13:43.951210 488103 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:13:43.951273 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:13:43.960457 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:13:43.973513 488103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:13:44.055313 488103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:13:44.132862 488103 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:13:44.143117 488103 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:13:44.218273 488103 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:13:44.228554 488103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:13:44.272600 488103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:13:44.315467 488103 out.go:203] * Preparing Kubernetes v1.23.5-rc.0 on Docker 20.10.12 ... I0221 09:13:44.315529 488103 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:13:44.348846 488103 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts I0221 09:13:44.352219 488103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:13:43.366865 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:45.367971 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:47.867419 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:44.363594 488103 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:13:44.363685 488103 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:13:44.363734 488103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:13:44.397396 488103 docker.go:606] Got preloaded images: I0221 09:13:44.397418 488103 docker.go:612] k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 wasn't preloaded I0221 09:13:44.397423 488103 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 k8s.gcr.io/kube-proxy:v1.23.5-rc.0 k8s.gcr.io/pause:3.6 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7] I0221 09:13:44.398800 488103 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:44.398810 488103 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:44.398799 488103 image.go:134] retrieving image: 
k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:44.398877 488103 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:44.399066 488103 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:44.399227 488103 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:44.399557 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.6 I0221 09:13:44.399572 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:13:44.399557 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.5.1-0 I0221 09:13:44.399605 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.8.6 I0221 09:13:44.399686 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399691 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399699 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399696 488103 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist I0221 09:13:44.399977 488103 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.5-rc.0: Error response from daemon: reference does not exist I0221 09:13:44.399998 488103 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist I0221 09:13:44.443356 488103 cache_images.go:116] "k8s.gcr.io/etcd:3.5.1-0" needs transfer: "k8s.gcr.io/etcd:3.5.1-0" does not exist at hash "sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d" in container runtime I0221 09:13:44.443404 488103 docker.go:287] Removing image: k8s.gcr.io/etcd:3.5.1-0 I0221 09:13:44.443444 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.5.1-0 I0221 09:13:44.443790 488103 cache_images.go:116] "k8s.gcr.io/pause:3.6" needs transfer: "k8s.gcr.io/pause:3.6" does not exist at hash "sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee" in container runtime I0221 09:13:44.443801 488103 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime I0221 09:13:44.443832 488103 docker.go:287] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6 I0221 09:13:44.443842 488103 docker.go:287] Removing image: k8s.gcr.io/pause:3.6 I0221 09:13:44.443863 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns/coredns:v1.8.6 I0221 09:13:44.443883 488103 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime I0221 09:13:44.443894 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.6 I0221 09:13:44.443910 488103 docker.go:287] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:13:44.443945 488103 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:13:44.536938 488103 cache_images.go:286] Loading 
image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 I0221 09:13:44.537026 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.1-0 I0221 09:13:44.544457 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 I0221 09:13:44.544517 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 I0221 09:13:44.544550 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5 I0221 09:13:44.544567 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.6 I0221 09:13:44.544469 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 I0221 09:13:44.544632 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6 I0221 09:13:44.544650 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.1-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.1-0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/etcd_3.5.1-0': No such file or directory I0221 09:13:44.544663 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 --> /var/lib/minikube/images/etcd_3.5.1-0 (112381440 bytes) I0221 09:13:44.604192 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.6: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/pause_3.6': No such file or directory I0221 09:13:44.604233 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 --> /var/lib/minikube/images/pause_3.6 (325632 bytes) I0221 09:13:44.604279 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory I0221 09:13:44.604315 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (15603712 bytes) I0221 09:13:44.604329 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory I0221 
09:13:44.604357 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes) I0221 09:13:44.649059 488103 docker.go:254] Loading image: /var/lib/minikube/images/pause_3.6 I0221 09:13:44.649086 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.6 | docker load" I0221 09:13:44.947516 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 from cache I0221 09:13:44.947561 488103 docker.go:254] Loading image: /var/lib/minikube/images/storage-provisioner_v5 I0221 09:13:44.947576 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load" I0221 09:13:45.466186 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache I0221 09:13:45.466230 488103 docker.go:254] Loading image: /var/lib/minikube/images/coredns_v1.8.6 I0221 09:13:45.466255 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load" I0221 09:13:45.923313 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:46.043249 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:46.044357 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:46.159542 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache I0221 09:13:46.159590 488103 docker.go:254] Loading image: /var/lib/minikube/images/etcd_3.5.1-0 I0221 09:13:46.159612 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.1-0 | docker load" I0221 09:13:46.159665 488103 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" does not exist at hash "21a6abb196d761b99a1c0080082127daf45c7ea5429bb08972caeefea3131e87" in container runtime I0221 09:13:46.159709 488103 docker.go:287] Removing image: k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:46.159742 488103 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" does not exist at hash "771d3886391c929e2b3b1722f9e55ef67fa8f48c043395cfca70c5ce56ae0394" in container runtime I0221 09:13:46.159773 488103 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" does not exist at hash "636768fbf314dcc4d0872d883b2a329d6de08f4742c73243a3552583533b2624" in container runtime I0221 09:13:46.159795 488103 docker.go:287] Removing image: k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:46.159799 488103 docker.go:287] Removing image: k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:46.159832 488103 ssh_runner.go:195] Run: 
docker rmi k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 I0221 09:13:46.159833 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 I0221 09:13:46.159747 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 I0221 09:13:46.240008 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:46.425062 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:46.430339 488103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:49.867661 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:52.367194 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:49.947328 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.1-0 | docker load": (3.787696555s) I0221 09:13:49.947358 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 from cache I0221 09:13:49.947432 488103 ssh_runner.go:235] Completed: docker rmi k8s.gcr.io/kube-scheduler:v1.23.5-rc.0: (3.787585932s) I0221 09:13:49.947470 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 I0221 09:13:49.947482 488103 ssh_runner.go:235] Completed: docker rmi k8s.gcr.io/kube-apiserver:v1.23.5-rc.0: (3.787519775s) I0221 09:13:49.947533 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 I0221 09:13:49.947538 488103 ssh_runner.go:235] Completed: docker rmi k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0: (3.787605164s) I0221 09:13:49.947560 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 I0221 09:13:49.947585 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 I0221 09:13:49.947595 488103 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.23.5-rc.0: (3.707559534s) I0221 09:13:49.947609 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 I0221 09:13:49.947633 488103 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" does not exist at hash "0c96fa04944904630c8121480edb68b27f40bb389158c4a70db6ef21acf559a2" in container runtime I0221 09:13:49.947665 488103 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.3.1: (3.522572449s) I0221 09:13:49.947697 488103 docker.go:287] Removing image: k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:49.947702 488103 cache_images.go:116] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: 
"docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570" in container runtime I0221 09:13:49.947706 488103 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.7: (3.517339721s) I0221 09:13:49.947636 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 I0221 09:13:49.947730 488103 docker.go:287] Removing image: docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:49.947738 488103 cache_images.go:116] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9" in container runtime I0221 09:13:49.947762 488103 docker.go:287] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:49.947786 488103 ssh_runner.go:195] Run: docker rmi docker.io/kubernetesui/metrics-scraper:v1.0.7 I0221 09:13:49.947762 488103 ssh_runner.go:195] Run: docker rmi docker.io/kubernetesui/dashboard:v2.3.1 I0221 09:13:49.947734 488103 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.23.5-rc.0 I0221 09:13:49.952449 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0': No such file or directory I0221 09:13:49.952490 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 (15133184 bytes) I0221 09:13:50.022902 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 I0221 09:13:50.023046 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 I0221 09:13:50.023364 488103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 I0221 09:13:50.023457 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7 I0221 09:13:50.023768 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0': No such file or directory I0221 09:13:50.023793 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 (30170624 bytes) I0221 09:13:50.023818 488103 cache_images.go:286] Loading image from: 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 I0221 09:13:50.023897 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1 I0221 09:13:50.023898 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0': No such file or directory I0221 09:13:50.023944 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 (32601088 bytes) I0221 09:13:50.032311 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.23.5-rc.0': No such file or directory I0221 09:13:50.032343 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 --> /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 (39278080 bytes) I0221 09:13:50.036680 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory I0221 09:13:50.036724 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (15031296 bytes) I0221 09:13:50.037237 488103 ssh_runner.go:352] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory I0221 09:13:50.037262 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (66936320 bytes) I0221 09:13:50.102459 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 I0221 09:13:50.102496 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 | docker load" I0221 09:13:51.428506 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.23.5-rc.0 | docker load": (1.325995772s) I0221 09:13:51.428534 488103 cache_images.go:315] Transferred and loaded 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 from cache I0221 09:13:51.428574 488103 docker.go:254] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7 I0221 09:13:51.428607 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.7 | docker load" I0221 09:13:51.945013 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache I0221 09:13:51.945061 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 I0221 09:13:51.945077 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 | docker load" I0221 09:13:53.211163 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.23.5-rc.0 | docker load": (1.266069019s) I0221 09:13:53.211193 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 from cache I0221 09:13:53.211220 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 I0221 09:13:53.211239 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 | docker load" I0221 09:13:54.367772 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:56.867139 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:13:54.537545 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.23.5-rc.0 | docker load": (1.326284586s) I0221 09:13:54.537574 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 from cache I0221 09:13:54.537609 488103 docker.go:254] Loading image: /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 I0221 09:13:54.537650 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 | docker load" I0221 09:13:56.339616 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.23.5-rc.0 | docker load": (1.801945857s) I0221 09:13:56.339657 488103 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 from cache I0221 09:13:56.339683 488103 docker.go:254] Loading image: /var/lib/minikube/images/dashboard_v2.3.1 I0221 09:13:56.339699 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load" I0221 09:13:59.419397 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.3.1 | docker load": (3.079678516s) I0221 09:13:59.419426 488103 cache_images.go:315] Transferred and loaded 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 from cache I0221 09:13:59.419452 488103 cache_images.go:123] Successfully loaded all cached images I0221 09:13:59.419461 488103 cache_images.go:92] LoadImages completed in 15.022026603s I0221 09:13:59.419520 488103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:13:59.514884 488103 cni.go:93] Creating CNI manager for "" I0221 09:13:59.514919 488103 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:13:59.514931 488103 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:13:59.514946 488103 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220221091339-6550 NodeName:no-preload-20220221091339-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:13:59.515113 488103 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "no-preload-20220221091339-6550" kubeletExtraArgs: node-ip: 192.168.67.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.67.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.5-rc.0 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: 
/etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:13:59.515204 488103 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220221091339-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 [Install] config: {KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:13:59.515266 488103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5-rc.0 I0221 09:13:59.523194 488103 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.23.5-rc.0: Process exited with status 2 stdout: stderr: ls: cannot access '/var/lib/minikube/binaries/v1.23.5-rc.0': No such file or directory Initiating transfer... I0221 09:13:59.523273 488103 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.23.5-rc.0 I0221 09:13:59.530874 488103 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl.sha256 I0221 09:13:59.530934 488103 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm.sha256 I0221 09:13:59.530958 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl I0221 09:13:59.530962 488103 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet.sha256 I0221 09:13:59.531041 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:13:59.531044 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm I0221 09:13:59.535091 488103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm': No such file or directory I0221 09:13:59.535122 488103 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubeadm --> /var/lib/minikube/binaries/v1.23.5-rc.0/kubeadm (45211648 bytes) I0221 09:13:59.535208 488103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.23.5-rc.0/kubectl': No such file or directory I0221 09:13:59.535224 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubectl --> /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl (46592000 bytes) I0221 09:13:59.544514 488103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet I0221 09:13:59.569193 488103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/binaries/v1.23.5-rc.0/kubelet': No such file or directory I0221 09:13:59.569244 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubelet --> /var/lib/minikube/binaries/v1.23.5-rc.0/kubelet (124521440 bytes) I0221 09:13:59.918061 488103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:13:59.925503 488103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes) I0221 09:13:59.938813 488103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes) I0221 09:13:59.952402 488103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes) I0221 09:13:59.965750 488103 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0221 09:13:59.969023 488103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:13:59.978856 488103 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550 for IP: 192.168.67.2 I0221 09:13:59.978948 488103 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:13:59.978986 488103 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:13:59.979078 488103 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key I0221 09:13:59.979093 488103 crypto.go:68] Generating cert 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt with IP's: [] I0221 09:14:00.260104 488103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt ... I0221 09:14:00.260139 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: {Name:mkb5c776f53657ebf89941d4ae75e7cd4fd1ecf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.260337 488103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key ... I0221 09:14:00.260352 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key: {Name:mk807ccea67a72008f91e196b40cec5e28bc0ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.260440 488103 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e I0221 09:14:00.260459 488103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:14:00.450652 488103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e ... I0221 09:14:00.450683 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e: {Name:mkc5ca2d1641ff622ad9bb5e15df0cf696413945 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.450852 488103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e ... 
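[Editor's note] The crypto.go:68 lines above generate the apiserver serving certificate "with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]". A minimal, self-signed sketch of that step using Go's crypto/x509 follows; it is only an approximation, since the real certificate is signed by the minikubeCA key rather than by itself, and the validity period below simply reuses the CertExpiration value (26280h) that appears later in this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key size and CommonName are illustrative assumptions.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the log
		// The four IP SANs logged for apiserver.crt.c7fa3a9e:
		IPAddresses: []net.IP{
			net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template doubles as parent); minikube signs with its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}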
I0221 09:14:00.450865 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e: {Name:mkadc0c64031cb8715bb9eacd0c1e62e0d48b84a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.450941 488103 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt I0221 09:14:00.451020 488103 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key I0221 09:14:00.451088 488103 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key I0221 09:14:00.451105 488103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt with IP's: [] I0221 09:14:00.557304 488103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt ... I0221 09:14:00.557333 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt: {Name:mk3c5a592e554d32f2143385c9ad234b8e698ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.557524 488103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key ... 
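[Editor's note] Each "lock.go:35] WriteFile acquiring <path>: {Name:... Delay:500ms Timeout:1m0s ...}" entry above is a file-lock acquisition guarding the write, retried every 500ms for up to a minute. The sketch below is a rough stand-in using an O_EXCL lock file with the same delay/timeout shape; the actual minikube lock implementation differs, so treat this purely as an illustration of the pattern the log is reporting.

package main

import (
	"fmt"
	"os"
	"time"
)

// writeLocked serializes writers on path by creating path+".lock"
// exclusively, retrying on the 500ms cadence seen in the log.
func writeLocked(path string, data []byte) error {
	lock := path + ".lock"
	deadline := time.Now().Add(time.Minute) // Timeout:1m0s from the log
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err == nil {
			defer os.Remove(lock) // release the lock on return
			f.Close()
			return os.WriteFile(path, data, 0600)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(500 * time.Millisecond) // Delay:500ms from the log
	}
}

func main() {
	// Hypothetical local path; the log writes profile cert/key files.
	fmt.Println(writeLocked("client.key.test", []byte("demo")))
}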
I0221 09:14:00.557537 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key: {Name:mk417036b97dde6cbbab80a20c937b065beed3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:00.557683 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:14:00.557722 488103 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:14:00.557733 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:14:00.557757 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:14:00.557784 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:14:00.557808 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:14:00.557847 488103 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:00.558683 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:14:00.577905 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:14:00.596684 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt --> 
/var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:14:00.615566 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:14:00.633972 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:14:00.653103 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:14:00.671588 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:14:00.690166 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:14:00.709166 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:14:00.727776 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:14:00.746130 488103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:14:00.764312 488103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:14:00.777756 488103 ssh_runner.go:195] Run: openssl version I0221 09:14:00.783035 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:14:00.791042 488103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:14:00.794707 488103 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:14:00.794749 488103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:14:00.800080 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:14:00.809600 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:14:00.818126 488103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:00.821632 488103 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:00.821676 488103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:00.827026 488103 ssh_runner.go:195] 
Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:14:00.835075 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:14:00.843086 488103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:14:00.846473 488103 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:14:00.846524 488103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:14:00.851694 488103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:14:00.859910 488103 kubeadm.go:391] StartCluster: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:00.860021 488103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:14:00.893585 488103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:14:00.901588 488103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:14:00.909236 488103 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:14:00.909302 488103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf 
/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:14:00.916816 488103 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:14:00.916858 488103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:13:59.367181 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:01.867767 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:01.429678 488103 out.go:203] - Generating certificates and keys ... I0221 09:14:03.820433 488103 out.go:203] - Booting up control plane ... I0221 09:14:04.367135 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:06.904036 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:08.974172 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:11.366696 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:13.366899 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:15.867420 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:18.361662 488103 out.go:203] - Configuring RBAC rules ... 
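[Editor's note] The "Start:" entry above shows the full kubeadm init invocation: PATH is pinned to the version-specific binaries directory staged earlier, and a long --ignore-preflight-errors list disables checks the docker driver cannot satisfy from inside a container (Swap, Mem, SystemVerification, and so on). A reconstruction of that invocation, minus the SSH transport, is sketched below; the ignored-checks slice is an abbreviated subset of the full list in the log.

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Subset of the ignore-preflight-errors list logged above.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests", "Port-10250",
		"Swap", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.23.5-rc.0:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // in the log this runs over SSH inside the node container
}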
I0221 09:14:18.813515 488103 cni.go:93] Creating CNI manager for "" I0221 09:14:18.813542 488103 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:14:18.813571 488103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:14:18.813719 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:18.813796 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=no-preload-20220221091339-6550 minikube.k8s.io/updated_at=2022_02_21T09_14_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:19.212122 488103 ops.go:34] apiserver oom_adj: -16 I0221 09:14:19.212232 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:18.368282 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:20.866820 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:22.867214 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:19.768205 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:20.267993 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:20.768304 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:21.268220 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:21.767700 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:22.268206 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:22.767962 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:23.268219 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:23.768450 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:24.268208 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:25.366985 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:27.868840 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:24.768176 488103 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:25.268252 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:25.767611 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:26.267740 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:26.768207 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:27.268241 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:27.768173 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:28.268054 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:28.768008 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:29.267723 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:29.768242 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:30.268495 488103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:14:30.326062 488103 kubeadm.go:1020] duration metric: took 11.512405074s to wait for elevateKubeSystemPrivileges. I0221 09:14:30.326096 488103 kubeadm.go:393] StartCluster complete in 29.466192667s I0221 09:14:30.326117 488103 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:30.326239 488103 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:14:30.328631 488103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:30.847109 488103 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220221091339-6550" rescaled to 1 I0221 09:14:30.847168 488103 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:14:30.849044 488103 out.go:176] * Verifying Kubernetes components... 
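(The burst of identical "kubectl get sa default" runs above is a wait loop: kubeadm creates the "default" ServiceAccount asynchronously, so minikube polls roughly every 500ms, 11.5s in total here, until the account exists before granting kube-system privileges. An equivalent hand-rolled poll, sketched in shell with the kubeconfig path from the log and an illustrative interval:

    # Block until the default ServiceAccount has been created.
    until sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
)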
I0221 09:14:30.847213 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:14:30.849093 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:14:30.847248 488103 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:14:30.849158 488103 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220221091339-6550" I0221 09:14:30.847450 488103 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5-rc.0 I0221 09:14:30.849178 488103 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220221091339-6550" W0221 09:14:30.849187 488103 addons.go:165] addon storage-provisioner should already be in state true I0221 09:14:30.849205 488103 host.go:66] Checking if "no-preload-20220221091339-6550" exists ... I0221 09:14:30.849235 488103 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220221091339-6550" I0221 09:14:30.849258 488103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220221091339-6550" I0221 09:14:30.849610 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:30.849624 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:30.893964 488103 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:14:30.894066 488103 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:14:30.894080 488103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:14:30.894125 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:30.896323 488103 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220221091339-6550" W0221 09:14:30.896344 488103 addons.go:165] addon default-storageclass should already be in state true I0221 09:14:30.896364 488103 host.go:66] Checking if "no-preload-20220221091339-6550" exists ... I0221 09:14:30.896685 488103 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:30.935800 488103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:14:30.938226 488103 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220221091339-6550" to be "Ready" ... I0221 09:14:30.942045 488103 node_ready.go:49] node "no-preload-20220221091339-6550" has status "Ready":"True" I0221 09:14:30.942067 488103 node_ready.go:38] duration metric: took 3.808409ms waiting for node "no-preload-20220221091339-6550" to be "Ready" ... 
I0221 09:14:30.942078 488103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:14:30.942087 488103 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:14:30.942103 488103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:14:30.942161 488103 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:30.942716 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:30.956536 488103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:30.978427 488103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49414 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:31.007750 488103 pod_ready.go:92] pod "etcd-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:31.007779 488103 pod_ready.go:81] duration metric: took 51.208167ms waiting for pod "etcd-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.007794 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.016037 488103 pod_ready.go:92] pod "kube-apiserver-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:31.016073 488103 pod_ready.go:81] duration metric: took 8.267725ms waiting for pod "kube-apiserver-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.016086 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.023725 488103 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:31.023749 488103 pod_ready.go:81] duration metric: took 7.654894ms waiting for pod "kube-controller-manager-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:31.023763 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlrh9" in "kube-system" namespace to be "Ready" ... 
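(The pod_ready checks above walk each system-critical component and block until its Ready condition reports True. minikube does this in Go via client-go, but the same gate can be expressed with kubectl wait; a sketch using one of the selectors from the log's label list and the 6m budget, repeated per component label:

    # Block until the etcd pod in kube-system reports Ready,
    # mirroring what pod_ready.go waits for.
    kubectl wait --namespace kube-system \
      --for=condition=Ready pod \
      --selector component=etcd \
      --timeout=6m
)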
I0221 09:14:31.224660 488103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:14:31.325704 488103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:14:32.539312 488103 pod_ready.go:92] pod "kube-proxy-hlrh9" in "kube-system" namespace has status "Ready":"True" I0221 09:14:32.539346 488103 pod_ready.go:81] duration metric: took 1.515575512s waiting for pod "kube-proxy-hlrh9" in "kube-system" namespace to be "Ready" ... I0221 09:14:32.539355 488103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:32.543713 488103 pod_ready.go:92] pod "kube-scheduler-no-preload-20220221091339-6550" in "kube-system" namespace has status "Ready":"True" I0221 09:14:32.543732 488103 pod_ready.go:81] duration metric: took 4.370791ms waiting for pod "kube-scheduler-no-preload-20220221091339-6550" in "kube-system" namespace to be "Ready" ... I0221 09:14:32.543739 488103 pod_ready.go:38] duration metric: took 1.601647944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:14:32.543759 488103 api_server.go:51] waiting for apiserver process to appear ... I0221 09:14:32.543799 488103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:14:32.908548 488103 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.972707233s) I0221 09:14:32.908586 488103 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS I0221 09:14:32.921199 488103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.696493245s) I0221 09:14:32.940497 488103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.614757287s) I0221 09:14:32.940546 488103 api_server.go:71] duration metric: took 2.093358936s to wait for apiserver process to appear ... I0221 09:14:32.940562 488103 api_server.go:87] waiting for apiserver healthz status ... I0221 09:14:32.940606 488103 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... 
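(The long pipeline that just completed in 1.97s splices a hosts stanza into the CoreDNS Corefile immediately before the forward plugin, so that host.minikube.internal resolves to the host gateway 192.168.67.1 from inside the cluster, then replaces the configmap in place. The shape of the edit, reconstructed directly from the sed expression in the log:

    # Inject a host record ahead of "forward . /etc/resolv.conf"
    # in the CoreDNS Corefile, then swap the configmap back in.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^ *forward . \/etc\/resolv.conf.*/i \    hosts {\n       192.168.67.1 host.minikube.internal\n       fallthrough\n    }' \
      | kubectl replace -f -
)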
I0221 09:14:30.366555 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:32.367235 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:32.942965 488103 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:14:32.943036 488103 addons.go:417] enableAddons completed in 2.095803648s I0221 09:14:32.946520 488103 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok I0221 09:14:32.947385 488103 api_server.go:140] control plane version: v1.23.5-rc.0 I0221 09:14:32.947406 488103 api_server.go:130] duration metric: took 6.806136ms to wait for apiserver health ... I0221 09:14:32.947416 488103 system_pods.go:43] waiting for kube-system pods to appear ... I0221 09:14:33.005949 488103 system_pods.go:59] 8 kube-system pods found I0221 09:14:33.006016 488103 system_pods.go:61] "coredns-64897985d-cq4vt" [e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.006034 488103 system_pods.go:61] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.006049 488103 system_pods.go:61] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.006058 488103 system_pods.go:61] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.006070 488103 system_pods.go:61] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.006075 488103 system_pods.go:61] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.006080 488103 system_pods.go:61] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.006091 488103 system_pods.go:61] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.006102 488103 system_pods.go:74] duration metric: took 58.679811ms to wait for pod list to return data ... I0221 09:14:33.006112 488103 default_sa.go:34] waiting for default service account to be created ... I0221 09:14:33.008987 488103 default_sa.go:45] found service account: "default" I0221 09:14:33.009019 488103 default_sa.go:55] duration metric: took 2.900219ms for default service account to be created ... I0221 09:14:33.009028 488103 system_pods.go:116] waiting for k8s-apps to be running ... 
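(The health gate above polls https://192.168.67.2:8443/healthz and accepts only an HTTP 200 whose body is "ok", which arrived after ~6.8ms here. The same probe can be run by hand through kubectl's raw API access, which reuses the kubeconfig credentials instead of juggling client certificates:

    # Aggregate health; prints "ok" when every check passes.
    kubectl get --raw='/healthz'
    # Per-check breakdown of the same endpoint.
    kubectl get --raw='/healthz?verbose'
)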
I0221 09:14:33.143846 488103 system_pods.go:86] 8 kube-system pods found I0221 09:14:33.143877 488103 system_pods.go:89] "coredns-64897985d-cq4vt" [e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.143884 488103 system_pods.go:89] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.143889 488103 system_pods.go:89] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.143899 488103 system_pods.go:89] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.143906 488103 system_pods.go:89] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.143916 488103 system_pods.go:89] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.143923 488103 system_pods.go:89] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.143939 488103 system_pods.go:89] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.143977 488103 retry.go:31] will retry after 263.082536ms: missing components: kube-dns I0221 09:14:33.412618 488103 system_pods.go:86] 8 kube-system pods found I0221 09:14:33.412650 488103 system_pods.go:89] "coredns-64897985d-cq4vt" [e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.412658 488103 system_pods.go:89] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.412664 488103 system_pods.go:89] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.412670 488103 system_pods.go:89] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.412677 488103 system_pods.go:89] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.412683 488103 system_pods.go:89] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.412689 488103 system_pods.go:89] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.412702 488103 system_pods.go:89] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.412722 488103 retry.go:31] will retry after 381.329545ms: missing components: kube-dns I0221 09:14:33.799233 488103 system_pods.go:86] 8 kube-system pods found I0221 09:14:33.799261 488103 system_pods.go:89] "coredns-64897985d-cq4vt" 
[e8522b25-8c41-46a0-8d94-3f70aff8fc0d] Running I0221 09:14:33.799271 488103 system_pods.go:89] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:14:33.799277 488103 system_pods.go:89] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:14:33.799282 488103 system_pods.go:89] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running I0221 09:14:33.799286 488103 system_pods.go:89] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running I0221 09:14:33.799290 488103 system_pods.go:89] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:14:33.799296 488103 system_pods.go:89] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:14:33.799301 488103 system_pods.go:89] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0221 09:14:33.799307 488103 system_pods.go:126] duration metric: took 790.274513ms to wait for k8s-apps to be running ... I0221 09:14:33.799318 488103 system_svc.go:44] waiting for kubelet service to be running .... I0221 09:14:33.799356 488103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:14:33.809667 488103 system_svc.go:56] duration metric: took 10.340697ms WaitForService to wait for kubelet. I0221 09:14:33.809696 488103 kubeadm.go:548] duration metric: took 2.962508753s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0221 09:14:33.809717 488103 node_conditions.go:102] verifying NodePressure condition ... I0221 09:14:33.812958 488103 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:14:33.812985 488103 node_conditions.go:123] node cpu capacity is 8 I0221 09:14:33.812996 488103 node_conditions.go:105] duration metric: took 3.275185ms to run NodePressure ... I0221 09:14:33.813006 488103 start.go:213] waiting for startup goroutines ... I0221 09:14:33.847305 488103 start.go:496] kubectl: 1.23.4, cluster: 1.23.5-rc.0 (minor skew: 0) I0221 09:14:33.849970 488103 out.go:176] * Done! kubectl is now configured to use "no-preload-20220221091339-6550" cluster and "default" namespace by default I0221 09:14:34.866648 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:14:37.367571 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:03:40 UTC, end at Mon 2022-02-21 09:14:39 UTC. 
-- Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303912260Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303942802Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303968007Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.303982652Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.308479245Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314490990Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314522260Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314528141Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.314672962Z" level=info msg="Loading containers: start." Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.397445175Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.432371517Z" level=info msg="Loading containers: done." Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.443993121Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.444056839Z" level=info msg="Daemon has completed initialization" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. 
Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.465398038Z" level=info msg="API listen on [::]:2376" Feb 21 09:03:42 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:03:42.472253118Z" level=info msg="API listen on /var/run/docker.sock" Feb 21 09:04:25 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:04:25.238038406Z" level=info msg="ignoring event" container=5afd280d6ca1170ae488a5b552e3a1a019ffc651badfdefae21cf38b5344b4fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:04:25 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:04:25.358291189Z" level=info msg="ignoring event" container=dd88f9a2c29fd0d324bb0cc243731be6f6ad977286b27b60b3c81c02bc5112e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:04:46 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:04:46.865768077Z" level=info msg="ignoring event" container=72640011ea4692f842704d801b1fd6c5cdec01b158a87acedaada04ba21cbd58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:17 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:05:17.070485776Z" level=info msg="ignoring event" container=5753550452bdc181a9f3a1b4bde53fcd818a97bac42202af2c5ab08a1b8eaf9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:05:59 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:05:59.550940087Z" level=info msg="ignoring event" container=767d8f72b700525bc491176eb71e4e18f6edca1bc1fb1d91fdfbf8869232cde6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:06:58 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:06:58.545862700Z" level=info msg="ignoring event" container=232a60522c9e23285ef5a7fb7ded9674b1e879db7e9c46118658cf028dfc1f96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:08:12 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:12.567345086Z" level=info msg="ignoring event" container=987fc4d25f59800358d2084952b6585242449693072bbdbe977e71cecd1ad391 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:10:15 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:10:15.566326284Z" level=info msg="ignoring event" container=1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:13:32 enable-default-cni-20220221084933-6550 dockerd[458]: time="2022-02-21T09:13:32.546206381Z" level=info msg="ignoring event" container=7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 7e5453a7bbe4a 6e38f40d628db About a minute ago Exited storage-provisioner 6 cfeaa1bfff01b fcb59e0ee67e6 k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 6 minutes ago Running dnsutils 0 bdea8f4c61ad5 3eab59e55df1e a4ca41631cc7a 10 minutes ago Running coredns 0 b44b7a1956d4f b198c3fa15580 2114245ec4d6b 10 minutes ago Running kube-proxy 0 79ea26ef70591 6e0b11913ead7 aceacb6244f9f 10 minutes ago Running kube-scheduler 0 7482f2936a907 
22f36e8efd018 62930710c9634 10 minutes ago Running kube-apiserver 0 996ed6b04f1a3 2d52356b4d441 25f8c7f3da61c 10 minutes ago Running etcd 0 33b91e247ac96 9da67fbcae637 25444908517a5 10 minutes ago Running kube-controller-manager 0 b61f3663b8c4d * * ==> coredns [3eab59e55df1] <== * [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" * * ==> describe nodes <== * Name: enable-default-cni-20220221084933-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=enable-default-cni-20220221084933-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=enable-default-cni-20220221084933-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T09_04_01_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 09:03:54 +0000 Taints: Unschedulable: false Lease: HolderIdentity: enable-default-cni-20220221084933-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:14:35 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:03:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:03:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:03:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:14:13 +0000 Mon, 21 Feb 2022 09:04:11 +0000 KubeletReady kubelet is posting ready 
status Addresses: InternalIP: 192.168.58.2 Hostname: enable-default-cni-20220221084933-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32874648Ki pods: 110 System Info: Machine ID: b6a262faae404a5db719705fd34b5c8b System UUID: 88a905a7-4360-4926-9a27-46e272953df7 Boot ID: 36f9c729-2a96-4807-bb74-314dc2113999 Kernel Version: 5.11.0-1029-gcp OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.12 Kubelet Version: v1.23.4 Kube-Proxy Version: v1.23.4 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default netcat-668db85669-fm848 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m12s kube-system coredns-64897985d-mr75l 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 10m kube-system etcd-enable-default-cni-20220221084933-6550 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 10m kube-system kube-apiserver-enable-default-cni-20220221084933-6550 250m (3%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-controller-manager-enable-default-cni-20220221084933-6550 200m (2%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-proxy-z67wt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system kube-scheduler-enable-default-cni-20220221084933-6550 100m (1%) 0 (0%) 0 (0%) 0 (0%) 10m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%) 0 (0%) memory 170Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 10m kube-proxy Normal Starting 10m kubelet Starting kubelet. Normal NodeHasNoDiskPressure 10m (x4 over 10m) kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m (x4 over 10m) kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 10m (x4 over 10m) kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientMemory Normal Starting 10m kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientPID Normal NodeNotReady 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeNotReady Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeHasSufficientMemory Normal NodeReady 10m kubelet Node enable-default-cni-20220221084933-6550 status is now: NodeReady * * ==> dmesg <== * [ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +2.963841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.035853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [ +1.023933] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06 [Feb21 09:14] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.035516] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.019972] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +2.943777] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027861] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.019959] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +2.951870] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.015815] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 [ +1.027946] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0 [ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06 * * ==> etcd [2d52356b4d44] <== * {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1662587402] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"343.853384ms","start":"2022-02-21T09:04:00.663Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1662587402] 'process raft request' (duration: 171.634431ms)","trace[1662587402] 'compare' (duration: 171.878651ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.663Z","time spent":"343.918406ms","remote":"127.0.0.1:34878","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":263,"response count":0,"response size":39,"request content":"compare: success:> failure: >"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1430197607] 
transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"340.353611ms","start":"2022-02-21T09:04:00.667Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1430197607] 'process raft request' (duration: 340.250258ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[2126245749] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"343.979445ms","start":"2022-02-21T09:04:00.663Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[2126245749] 'process raft request' (duration: 343.796716ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1081920735] transaction","detail":"{read_only:false; response_revision:290; number_of_response:1; }","duration":"100.157444ms","start":"2022-02-21T09:04:00.907Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1081920735] 'process raft request' (duration: 100.127415ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.663Z","time spent":"344.019188ms","remote":"127.0.0.1:34856","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":718,"response count":0,"response size":39,"request content":"compare: success:> failure:<>"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[223105609] transaction","detail":"{read_only:false; response_revision:289; number_of_response:1; }","duration":"297.76584ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[223105609] 'process raft request' (duration: 297.651356ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1844690839] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"297.835735ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1844690839] 'process raft request' (duration: 297.623192ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1556602659] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:291; }","duration":"172.477897ms","start":"2022-02-21T09:04:00.835Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1556602659] 'read index received' (duration: 171.241323ms)","trace[1556602659] 'applied index is now lower than readState.Index' (duration: 1.235517ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.667Z","time spent":"340.435747ms","remote":"127.0.0.1:34966","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3066,"response count":0,"response size":39,"request content":"compare: success:> failure:<>"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[121801330] transaction","detail":"{read_only:false; number_of_response:0; response_revision:287; }","duration":"297.95295ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[121801330] 'process raft request' (duration: 297.651332ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"etcdserver/util.go:166","msg":"apply request took too 
long","took":"344.268949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"298.608104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-enable-default-cni-20220221084933-6550\" ","response":"range_response_count:1 size:5797"} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[1271842616] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:290; }","duration":"344.297886ms","start":"2022-02-21T09:04:00.663Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[1271842616] 'agreement among raft nodes before linearized reading' (duration: 344.224087ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:04:01.007Z","caller":"traceutil/trace.go:171","msg":"trace[543956102] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-enable-default-cni-20220221084933-6550; range_end:; response_count:1; response_revision:290; }","duration":"298.63611ms","start":"2022-02-21T09:04:00.709Z","end":"2022-02-21T09:04:01.007Z","steps":["trace[543956102] 'agreement among raft nodes before linearized reading' (duration: 298.534406ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:04:01.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:04:00.663Z","time spent":"344.328322ms","remote":"127.0.0.1:34870","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":376,"request content":"key:\"/registry/namespaces/kube-system\" "} {"level":"info","ts":"2022-02-21T09:09:56.180Z","caller":"traceutil/trace.go:171","msg":"trace[1779109173] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"235.577473ms","start":"2022-02-21T09:09:55.944Z","end":"2022-02-21T09:09:56.180Z","steps":["trace[1779109173] 'process raft request' (duration: 138.096663ms)","trace[1779109173] 'compare' (duration: 97.35885ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:13:52.358Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":635} {"level":"info","ts":"2022-02-21T09:13:52.359Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":635,"took":"674.848µs"} {"level":"warn","ts":"2022-02-21T09:14:06.231Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"182.451204ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} {"level":"info","ts":"2022-02-21T09:14:06.231Z","caller":"traceutil/trace.go:171","msg":"trace[1418751583] transaction","detail":"{read_only:false; response_revision:709; number_of_response:1; }","duration":"271.569792ms","start":"2022-02-21T09:14:05.960Z","end":"2022-02-21T09:14:06.231Z","steps":["trace[1418751583] 'process raft request' (duration: 88.954187ms)","trace[1418751583] 'compare' (duration: 182.355848ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:14:11.373Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"148.076706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" 
count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:14:11.373Z","caller":"traceutil/trace.go:171","msg":"trace[2134174816] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:709; }","duration":"148.197037ms","start":"2022-02-21T09:14:11.225Z","end":"2022-02-21T09:14:11.373Z","steps":["trace[2134174816] 'agreement among raft nodes before linearized reading' (duration: 55.251314ms)","trace[2134174816] 'count revisions from in-memory index tree' (duration: 92.813553ms)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:14:11.373Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.013832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:14:11.373Z","caller":"traceutil/trace.go:171","msg":"trace[402769945] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:709; }","duration":"165.219559ms","start":"2022-02-21T09:14:11.208Z","end":"2022-02-21T09:14:11.373Z","steps":["trace[402769945] 'agreement among raft nodes before linearized reading' (duration: 72.164621ms)","trace[402769945] 'count revisions from in-memory index tree' (duration: 92.824451ms)"],"step_count":2} * * ==> kernel <== * 09:14:40 up 57 min, 0 users, load average: 1.17, 1.46, 2.35 Linux enable-default-cni-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [22f36e8efd01] <== * I0221 09:03:56.068930 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0221 09:03:56.465277 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2] I0221 09:03:56.466317 1 controller.go:611] quota admission added evaluator for: endpoints I0221 09:03:56.564935 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0221 09:03:56.932173 1 controller.go:611] quota admission added evaluator for: serviceaccounts {"level":"warn","ts":"2022-02-21T09:04:00.099Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012e88c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"} {"level":"warn","ts":"2022-02-21T09:04:00.099Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00136a380/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"} E0221 09:04:00.099616 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout E0221 09:04:00.099640 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout E0221 09:04:00.099691 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 22.201µs, panicked: false, err: context canceled, panic-reason: E0221 09:04:00.099701 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0221 09:04:00.099731 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 16.253µs, panicked: 
false, err: context canceled, panic-reason: E0221 09:04:00.100928 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0221 09:04:00.102049 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout E0221 09:04:00.104227 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout E0221 09:04:00.106825 1 timeout.go:137] post-timeout activity - time-elapsed: 7.159098ms, PATCH "/api/v1/namespaces/default/events/enable-default-cni-20220221084933-6550.16d5c1b6db877374" result: E0221 09:04:00.107566 1 timeout.go:137] post-timeout activity - time-elapsed: 8.036034ms, PATCH "/api/v1/namespaces/kube-system/pods/etcd-enable-default-cni-20220221084933-6550/status" result: I0221 09:04:00.536074 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0221 09:04:00.666536 1 controller.go:611] quota admission added evaluator for: deployments.apps I0221 09:04:01.017550 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0221 09:04:01.029628 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0221 09:04:13.522118 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0221 09:04:13.628443 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0221 09:04:14.428054 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0221 09:08:27.303551 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.99.224.251] * * ==> kube-controller-manager [9da67fbcae63] <== * I0221 09:04:13.466436 1 shared_informer.go:247] Caches are synced for daemon sets I0221 09:04:13.466456 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0221 09:04:13.504429 1 shared_informer.go:247] Caches are synced for persistent volume I0221 09:04:13.505653 1 shared_informer.go:247] Caches are synced for endpoint_slice I0221 09:04:13.514947 1 range_allocator.go:374] Set node enable-default-cni-20220221084933-6550 PodCIDR to [10.244.0.0/24] I0221 09:04:13.527153 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z67wt" I0221 09:04:13.538371 1 shared_informer.go:247] Caches are synced for ReplicaSet I0221 09:04:13.602493 1 shared_informer.go:247] Caches are synced for attach detach I0221 09:04:13.613417 1 shared_informer.go:247] Caches are synced for disruption I0221 09:04:13.613442 1 disruption.go:371] Sending events to api server. 
I0221 09:04:13.614327 1 shared_informer.go:247] Caches are synced for deployment I0221 09:04:13.616020 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0221 09:04:13.631822 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:04:13.641662 1 shared_informer.go:247] Caches are synced for endpoint I0221 09:04:13.646936 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4pdmv" I0221 09:04:13.654908 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:13.702047 1 shared_informer.go:247] Caches are synced for resource quota I0221 09:04:13.705044 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-mr75l" I0221 09:04:14.079886 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:14.114552 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:04:14.114583 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0221 09:04:14.316540 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:04:14.322975 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-4pdmv" I0221 09:08:27.304208 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:08:27.312062 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-fm848" * * ==> kube-proxy [b198c3fa1558] <== * I0221 09:04:14.332138 1 node.go:163] Successfully retrieved node IP: 192.168.58.2 I0221 09:04:14.332226 1 server_others.go:138] "Detected node IP" address="192.168.58.2" I0221 09:04:14.332285 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:04:14.423508 1 server_others.go:206] "Using iptables Proxier" I0221 09:04:14.423558 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:04:14.423570 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:04:14.423591 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:04:14.424432 1 server.go:656] "Version info" version="v1.23.4" I0221 09:04:14.425726 1 config.go:226] "Starting endpoint slice config controller" I0221 09:04:14.425757 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:04:14.425822 1 config.go:317] "Starting service config controller" I0221 09:04:14.425841 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:04:14.526220 1 shared_informer.go:247] Caches are synced for service config I0221 09:04:14.526281 1 
shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [6e0b11913ead] <== * W0221 09:03:54.430880 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0221 09:03:54.430893 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 09:03:54.430879 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 09:03:54.430925 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 09:03:54.431681 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0221 09:03:54.431730 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0221 09:03:54.502529 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0221 09:03:54.504139 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0221 09:03:55.250688 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0221 09:03:55.250722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0221 09:03:55.362713 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 09:03:55.362749 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 09:03:55.411193 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" 
E0221 09:03:55.411222 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0221 09:03:55.505977 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:55.506014 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:55.539567 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0221 09:03:55.539607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0221 09:03:55.556891 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0221 09:03:55.556925 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0221 09:03:55.565614 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:55.565651 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 09:03:55.588882 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0221 09:03:55.588920 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0221 09:03:58.425033 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Mon 2022-02-21 09:03:40 UTC, end at Mon 2022-02-21 09:14:40 UTC. --
Feb 21 09:11:49 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:11:49.406721 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
Feb 21 09:12:01 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:01.406349 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85"
Feb 21 09:12:01 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:01.406655 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
Feb 21 09:12:16 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:16.406724 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85"
Feb 21 09:12:16 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:16.407062 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
Feb 21 09:12:29 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:29.406058 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85"
Feb 21 09:12:29 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:29.406368 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
Feb 21 09:12:40 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:40.407282 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85"
Feb 21 09:12:40 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:40.408161 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
Feb 21 09:12:51 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:12:51.406355 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85"
Feb 21 09:12:51 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:12:51.406601 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
\"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:13:02 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:02.406042 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:13:33 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:33.549421 1965 scope.go:110] "RemoveContainer" containerID="1a6d835452a795794468d7dbb811bb2e752edb68fe42ac0215cf9efd022eaf85" Feb 21 09:13:33 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:33.549735 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:13:33 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:13:33.550008 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:13:48 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:13:48.406541 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:13:48 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:13:48.406830 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:02 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:02.406168 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:02 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:02.406473 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:13 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:13.406773 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:13 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:13.407121 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b Feb 21 09:14:25 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:25.406669 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd" Feb 21 09:14:25 enable-default-cni-20220221084933-6550 
Feb 21 09:14:25 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:25.406881 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
Feb 21 09:14:39 enable-default-cni-20220221084933-6550 kubelet[1965]: I0221 09:14:39.406097 1965 scope.go:110] "RemoveContainer" containerID="7e5453a7bbe4ab760a7f1e881bf9571a0c768135e214c50ab0e1fd6caa3623cd"
Feb 21 09:14:39 enable-default-cni-20220221084933-6550 kubelet[1965]: E0221 09:14:39.406312 1965 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b)\"" pod="kube-system/storage-provisioner" podUID=8c315e0b-b7ce-4de5-b6cd-60f39b01fc6b
*
* ==> storage-provisioner [7e5453a7bbe4] <==
*
I0221 09:13:02.528428 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0221 09:13:32.530476 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p enable-default-cni-20220221084933-6550 -n enable-default-cni-20220221084933-6550
helpers_test.go:262: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/enable-default-cni]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context enable-default-cni-20220221084933-6550 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 describe pod : exit status 1 (39.997986ms)

** stderr **
error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context enable-default-cni-20220221084933-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "enable-default-cni-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p enable-default-cni-20220221084933-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p enable-default-cni-20220221084933-6550: (2.987234422s)
--- FAIL: TestNetworkPlugins/group/enable-default-cni (671.75s)
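[editor's note] The fatal storage-provisioner line above is a plain HTTPS GET against the cluster service IP (10.96.0.1:443) timing out, and the kubelet back-off values around it (2m40s, then 5m0s) are consistent with kubelet's default container restart back-off, which doubles from 10s up to a 5m cap. A minimal standalone sketch of that reachability probe, in Go since the suite itself is Go (a hypothetical diagnostic, not part of the test suite; the URL and 32-second timeout are copied from the failing log line):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 32 * time.Second, // same timeout as the failing request above
            // The real provisioner authenticates with in-cluster credentials and the
            // cluster CA; this probe skips verification because it only tests reachability.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // expect an i/o timeout when the service network is broken
            return
        }
        resp.Body.Close()
        fmt.Println("apiserver reachable:", resp.Status)
    }

Run from inside the affected pod's network namespace, this fails the same way the provisioner does when the CNI or kube-proxy rules are broken, and succeeds once service traffic flows again.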
=== FAIL: . TestNetworkPlugins/group/kubenet/DNS (370.31s)
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148930134s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141584405s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12948673s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132475766s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14308808s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
E0221 09:13:52.327955 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:13:55.511603 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:14:02.568935 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:14:05.984218 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147684786s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
E0221 09:14:15.992056 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:14:23.049680 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177565411s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
E0221 09:14:33.149259 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136446775s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
E0221 09:15:04.010193 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:15:10.799839 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123428364s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13232504s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
E0221 09:16:16.370294 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:16:18.873156 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:16:25.930686 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:16:33.843826 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:16:46.065510 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140494652s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
E0221 09:17:13.752037 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:17:30.568686 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:18:27.320235 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.325462 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.335697 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.355954 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.396226 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.476519 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.636908 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.957431 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:28.598254 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:29.174160 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:18:29.878647 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:32.439685 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:35.029781 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:18:37.560789 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137813409s)

-- stdout --
;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (370.31s)
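[editor's note] The verdict above (want=*"10.96.0.1"*) comes from a poll loop in net_test.go that keeps re-running the same kubectl exec until the nslookup output contains the kubernetes.default service IP. A rough self-contained approximation of that check in Go (a sketch, not the actual test source; the context name and expected IP are taken from the log, while the 6-minute deadline and 10-second pause are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumption: the real test manages its own timeout
        for time.Now().Before(deadline) {
            // Same command the test runs against the netcat deployment.
            out, err := exec.Command("kubectl", "--context", "kubenet-20220221084933-6550",
                "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
            if err == nil && strings.Contains(string(out), "10.96.0.1") {
                fmt.Println("DNS resolves kubernetes.default")
                return
            }
            fmt.Printf("retrying; got %q (err: %v)\n", strings.TrimSpace(string(out)), err)
            time.Sleep(10 * time.Second)
        }
        fmt.Println("giving up: nslookup never returned an answer containing 10.96.0.1")
    }

Each failed attempt above takes roughly 15 seconds because that is nslookup's own resolver timeout inside the pod, which is why the test burns 370 seconds before giving up.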
E0221 09:21:11.163791 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:21:16.370120 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:21:46.065878 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:21:58.037581 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.042867 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.053122 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.073463 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.113730 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.194079 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.354490 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.675033 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:59.315792 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:00.596200 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:03.157230 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:08.278024 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:16.217751 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.223072 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.233301 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.253567 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.293876 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.374850 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.535255 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.855806 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:17.496352 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:18.518194 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:18.776582 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:21.337172 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:26.457346 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:30.568387 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:22:36.698397 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:38.998520 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:39.415283 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:22:57.179497 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:23:12.221760 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:23:19.959136 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:23:27.319752 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:23:29.174104 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:23:35.029378 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
=== FAIL: . TestNetworkPlugins/group/kubenet (678.23s)
net_test.go:198: "kubenet" test finished in 29m4.799378418s, failed=true
net_test.go:199: *** TestNetworkPlugins/group/kubenet FAILED at 2022-02-21 09:18:38.560907718 +0000 UTC m=+3211.323227311
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestNetworkPlugins/group/kubenet]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect kubenet-20220221084933-6550
helpers_test.go:236: (dbg) docker inspect kubenet-20220221084933-6550:

-- stdout --
[
    {
        "Id": "42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301",
        "Created": "2022-02-21T09:07:35.104979001Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [ "/sbin/init" ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 462899,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-02-21T09:07:35.48618442Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:1312ccd2422d964b2df363d606d0c016d6acbc1ddf0211c26a74717f2897dc43",
        "ResolvConfPath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/hostname",
        "HostsPath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/hosts",
        "LogPath": "/var/lib/docker/containers/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301/42de8a5f623e458c43fad768e7ffc095ac6135e6aaaf18e24be7e05f1fcee301-json.log",
        "Name": "/kubenet-20220221084933-6550",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "unconfined",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro",
                "kubenet-20220221084933-6550:/var"
            ],
            "ContainerIDFile": "",
            "LogConfig": { "Type": "json-file", "Config": {} },
            "NetworkMode": "kubenet-20220221084933-6550",
            "PortBindings": {
                "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ],
                "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ],
                "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ],
                "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ],
                "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ]
            },
            "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "apparmor=unconfined",
                "label=disable"
            ],
            "Tmpfs": { "/run": "", "/tmp": "" },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [ 0, 0 ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 2000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [ { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "rwm" } ],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
"KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/li
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd-init/diff:/var/lib/docker/overlay2/a149cc5f42de5038ba3e5f21bd82a6356aa9456aec11a3690e569db72ca8eaf5/diff:/var/lib/docker/overlay2/fce4ef34d92ff253276822af091ec16578b924486e0e565c7f34b479bdbe1606/diff:/var/lib/docker/overlay2/a38e10ed288ad73a8ce637c6219d5b13318bb72210ce4fb6291dd57d238df611/diff:/var/lib/docker/overlay2/93c18ddce49a4eb2d344b0579d7f58df999d53bb329042f78a5b3f40296f83cd/diff:/var/lib/docker/overlay2/d9ee14262b31fff87924cf913d835b02182e6a8dc3783ecb5f67dbd29faa079c/diff:/var/lib/docker/overlay2/8a0b39fc5ebcb9761ad433102244050fab339d3e027a5a30a7e01d121fe362e4/diff:/var/lib/docker/overlay2/7a6b41deef37b6400fac0c94871f73040d4cae157de2e929b7a0bf96f0e353aa/diff:/var/lib/docker/overlay2/49d913ee587718efcf521a65de0109df5c2c42644f75690beed97a437d623b66/diff:/var/lib/docker/overlay2/89e8fa960313dfb6ece4007576bf5d89e2b0ea68a2bde9de5527647a7df6ff0f/diff:/var/lib/docker/overlay2/0d2632e6ee2f4198f3c7c0d37c31ee03d33864e0131ae85dd3001adecfae53db/diff:/var/lib/docker/overlay2/b28a3b2bbe2e231d00863aaaf6be28d2e5a76197f2be7934f18720a71f2435a8/diff:/var/lib/docker/overlay2/6f88e83ff5bb26bf909248d8df5eacf4d01f72523927fd7433a925c525fbd274/diff:/var/lib/docker/overlay2/c2547d18d2639fa842b0977952fa69cffd7b076e8db357ef0ec62fae9676619b/diff:/var/lib/docker/overlay2/34465ba37bb8eef87906c38cb891a738c54a16dbf47d27fef95cee96f0b62c4f/diff:/var/lib/docker/overlay2/50d3383518bb8e6a01e1cfcae621b8c2238dc54ae94d63606d145633045bc6cd/diff:/var/lib/docker/overlay2/83561f107aed2bf9cb840ff66aceb882d0e39bcced79ef27ce67c17e063ba536/diff:/var/lib/docker/overlay2/bfb7c0c91e0a7649286abe3a6b02df4585a931ffa3a6b1e4846aa1b98eb0d997/diff:/var/lib/docker/overlay2/80589044da151f3d3eed7824d236bc8a19697182abca8610eb34f1c3cae53bf0/diff:/var/lib/docker/overlay2/5a336e956e467ad50830aa4b9ed5b65cb3ef08e6d49a4900acd0440ca3c03c3a/diff:/var/lib/docker/overlay2/f7b3446c0dca80a2053183e360de8f99e3a31e87ad633fcc23f14b2045643596/diff:/var/lib/docker/overlay2/03688a16272f13f172ed8baa181df2a1702e08edbc9c5a4a767265adb2c00b79/diff:/var/lib/docker/overlay2/c5445afaabab58221dbffedde261b3f9948116ccba1344b6f3dd4fdfaf5c98cd/diff:/var/lib/docker/overlay2/704464fd20f23f93d0ad3c3c58103f01fc8a0d76d70bc191f718273e7412bb42/diff:/var/lib/docker/overlay2/a344f9ca1bf1b674ebf560c2a0498913ed90c836e3b4f5b1968022f7f7e54055/diff:/var/lib/docker/overlay2/929f053e6a327a7203e8d8c281f827d02ad922fdabbdfcc5453003daba88cdf4/diff:/var/lib/docker/overlay2/d6a160f0ffe94e5abf99d928a638f6a6220b24c7ba643a438c7557146d240f70/diff:/var/lib/docker/overlay2/c6b58e187a2df10fd55e19e6ad38c20f5ebf5e62617a59b6ce4afe0fc712c0f6/diff:/var/lib/docker/overlay2/fef43deeff878af42cc1f31a1f265b66e642aa166cec02eba5a0c82fb6d70872/diff:/var/lib/docker/overlay2/0d7bbf19182056a05efca89485df54da5fba9903000517fc9e7c2072aa7ba62e/diff:/var/lib/docker/overlay2/5840748964e9ce3890a2fa4fc2e3fb0635426f944b42b3128808a7aca816bf48/diff:/var/lib/docker/overlay2/a94126962e9f346960e3f570b96e0a6e987d255fcba30bf3fd76b227e66275c6/diff:/var/lib/docker/overlay2/211ba88e1557a352838b4896dff8efac70330c1424cff237613c4186994ec037/diff:/var/lib/docker/overlay2/c04720d23411d6693832990e2c79bc529ade609401eaeee8d159f5a6d74d986e/diff:/var/lib/docker/overlay2/70bfe328c89d3a3902a4ba591874fe936425b208ec8e39c2e25d60f21b212300/diff:/var/lib/docker/overlay2/7dcce8bec6dd9561387116c9f724b9101e94e15c0e19c7880ce2b4e96b2921a6/diff:/var/lib/docker/overlay2/81463813805e91be42d9f71b5c63823a80abaaa42d8638034eaddcd66c148310/diff:/var/lib/docker/overlay2/c7dbb1344cf7167b07d58d49902e0db9335bc92a71a324c15dadc14c0aa67c45/diff:/var/lib/docker/overlay2/909df0d6b92b92bcf6e376f0e33d120c53cd24ad0b997d31b5a84f3d1ab89769/diff:/var/lib/docker/overlay2/eacfeaee6ed4aa14e847adad2c6d03439c53944654cc2779cc5c80d239ab323c/diff:/var/lib/docker/overlay2/48c58492169f9043495b11b47439f4dcdeb168525e5bb3a8c130c1ddd5ec882d/diff:/var/lib/docker/overlay2/4520a69f91e766b7694473c2107cc0644394d65da900259eb5933779accdc86d/diff:/var/lib/docker/overlay2/68aafdd300c4b4ebf94976a352959e75fa170653a69e13da38650a447cd5cb2f/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
                "MergedDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd/merged",
                "UpperDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd/diff",
                "WorkDir": "/var/lib/docker/overlay2/2453528f846de5519b124affa4bc267a8fd283c5a327498d46e2aedb037076cd/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" },
            { "Type": "volume", "Name": "kubenet-20220221084933-6550", "Source": "/var/lib/docker/volumes/kubenet-20220221084933-6550/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }
        ],
        "Config": {
            "Hostname": "kubenet-20220221084933-6550",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} },
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "container=docker",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ],
            "OnBuild": null,
            "Labels": {
                "created_by.minikube.sigs.k8s.io": "true",
                "mode.minikube.sigs.k8s.io": "kubenet-20220221084933-6550",
                "name.minikube.sigs.k8s.io": "kubenet-20220221084933-6550",
                "role.minikube.sigs.k8s.io": ""
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "2ea7ab169662f2e3ae922211e4f6950f7381d67a66339e43f5c5b1dcb14edbd2",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49399" } ],
                "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49398" } ],
                "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49395" } ],
                "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49397" } ],
                "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "49396" } ]
            },
            "SandboxKey": "/var/run/docker/netns/2ea7ab169662",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
"MacAddress": "", "Networks": { "kubenet-20220221084933-6550": { "IPAMConfig": { "IPv4Address": "192.168.76.2" }, "Links": null, "Aliases": [ "42de8a5f623e", "kubenet-20220221084933-6550" ], "NetworkID": "645548ce5696d8ac0208ac4f08e5263e8d80d8e1b04d7feaec6b203ababf5d53", "EndpointID": "ea840adb457037b7385a6cfe70ee74ea986517b42f5ffaf7dfa1e98ec5039916", "Gateway": "192.168.76.1", "IPAddress": "192.168.76.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:4c:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubenet-20220221084933-6550 -n kubenet-20220221084933-6550 helpers_test.go:245: <<< TestNetworkPlugins/group/kubenet FAILED: start of post-mortem logs <<< helpers_test.go:246: ======> post-mortem[TestNetworkPlugins/group/kubenet]: minikube logs <====== helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p kubenet-20220221084933-6550 logs -n 25 helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p kubenet-20220221084933-6550 logs -n 25: (1.226452639s) helpers_test.go:253: TestNetworkPlugins/group/kubenet logs: -- stdout -- * * ==> Audit <== * |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------| | ssh | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:40 UTC | Mon, 21 Feb 2022 09:03:40 UTC | | | pgrep -a kubelet | | | | | | | -p | calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:45 UTC | Mon, 21 Feb 2022 09:03:47 UTC | | | logs -n 25 | | | | | | | delete | -p calico-20220221084934-6550 | calico-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:48 UTC | Mon, 21 Feb 2022 09:03:50 UTC | | -p | auto-20220221084933-6550 logs | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:20 UTC | Mon, 21 Feb 2022 09:07:22 UTC | | | -n 25 | | | | | | | delete | -p auto-20220221084933-6550 | auto-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:22 UTC | Mon, 21 Feb 2022 09:07:25 UTC | | start | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:32 UTC | Mon, 21 Feb 2022 09:08:26 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:26 UTC | Mon, 21 Feb 2022 09:08:27 UTC | | | enable-default-cni-20220221084933-6550 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:03:51 UTC | Mon, 21 Feb 2022 09:08:41 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=docker | | | | | | | ssh | -p bridge-20220221084933-6550 | 
| ssh | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:08:41 UTC | Mon, 21 Feb 2022 09:08:41 UTC |
| | pgrep -a kubelet | | | | | |
| -p | kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:44 UTC | Mon, 21 Feb 2022 09:09:45 UTC |
| | logs -n 25 | | | | | |
| delete | -p kindnet-20220221084934-6550 | kindnet-20220221084934-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:45 UTC | Mon, 21 Feb 2022 09:09:48 UTC |
| start | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:09:48 UTC | Mon, 21 Feb 2022 09:11:57 UTC |
| | old-k8s-version-20220221090948-6550 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| addons | enable metrics-server -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:06 UTC |
| | old-k8s-version-20220221090948-6550 | | | | | |
| | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| start | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:07:25 UTC | Mon, 21 Feb 2022 09:12:15 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --network-plugin=kubenet | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p kubenet-20220221084933-6550 | kubenet-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:15 UTC | Mon, 21 Feb 2022 09:12:15 UTC |
| | pgrep -a kubelet | | | | | |
| stop | -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:06 UTC | Mon, 21 Feb 2022 09:12:17 UTC |
| | old-k8s-version-20220221090948-6550 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p | old-k8s-version-20220221090948-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:12:17 UTC | Mon, 21 Feb 2022 09:12:17 UTC |
| | old-k8s-version-20220221090948-6550 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| -p | bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:35 UTC | Mon, 21 Feb 2022 09:13:36 UTC |
| | logs -n 25 | | | | | |
| delete | -p bridge-20220221084933-6550 | bridge-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:36 UTC | Mon, 21 Feb 2022 09:13:39 UTC |
| start | -p no-preload-20220221091339-6550 | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:13:39 UTC | Mon, 21 Feb 2022 09:14:33 UTC |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.23.5-rc.0 | | | | | |
| -p | enable-default-cni-20220221084933-6550 | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:39 UTC | Mon, 21 Feb 2022 09:14:40 UTC |
| | logs -n 25 | | | | | |
| addons | enable metrics-server -p | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:42 UTC | Mon, 21 Feb 2022 09:14:43 UTC |
| | no-preload-20220221091339-6550 | | | | | |
| | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| delete | -p | enable-default-cni-20220221084933-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:40 UTC | Mon, 21 Feb 2022 09:14:43 UTC |
| | enable-default-cni-20220221084933-6550 | | | | | |
| stop | -p | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:43 UTC | Mon, 21 Feb 2022 09:14:54 UTC |
| | no-preload-20220221091339-6550 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p | no-preload-20220221091339-6550 | jenkins | v1.25.1 | Mon, 21 Feb 2022 09:14:54 UTC | Mon, 21 Feb 2022 09:14:54 UTC |
| | no-preload-20220221091339-6550 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
*
Log file created at: 2022/02/21 09:14:54
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.17.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0221 09:14:54.373674 497077 out.go:297] Setting OutFile to fd 1 ...
I0221 09:14:54.373746 497077 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 09:14:54.373749 497077 out.go:310] Setting ErrFile to fd 2...
I0221 09:14:54.373753 497077 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0221 09:14:54.373852 497077 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
I0221 09:14:54.374071 497077 out.go:304] Setting JSON to false
I0221 09:14:54.375981 497077 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3449,"bootTime":1645431446,"procs":953,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0221 09:14:54.376074 497077 start.go:122] virtualization: kvm guest
I0221 09:14:54.378621 497077 out.go:176] * [no-preload-20220221091339-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0221 09:14:54.380233 497077 out.go:176] - MINIKUBE_LOCATION=13641
I0221 09:14:54.378810 497077 notify.go:193] Checking for updates...
I0221 09:14:54.381954 497077 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true I0221 09:14:54.387076 497077 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:14:54.389173 497077 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube I0221 09:14:54.392021 497077 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64 I0221 09:14:54.393093 497077 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5-rc.0 I0221 09:14:54.394047 497077 driver.go:344] Setting default libvirt URI to qemu:///system I0221 09:14:54.455714 497077 docker.go:132] docker version: linux-20.10.12 I0221 09:14:54.455798 497077 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:14:54.574019 497077 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-21 09:14:54.499125125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.12.0]] Warnings:}} I0221 09:14:54.574121 497077 docker.go:237] overlay module found I0221 09:14:54.576244 497077 out.go:176] * Using the docker driver based on existing profile I0221 09:14:54.576277 497077 start.go:281] selected driver: docker I0221 09:14:54.576284 497077 start.go:798] validating driver "docker" against &{Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:54.576403 497077 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} W0221 09:14:54.576451 497077 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:14:54.576475 497077 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 09:14:54.577679 497077 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:14:54.578448 497077 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0221 09:14:54.701284 497077 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:61 SystemTime:2022-02-21 09:14:54.63216937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:}} W0221 09:14:54.701445 497077 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0221 09:14:54.701470 497077 out.go:241] ! Your cgroup does not allow setting memory. 
I0221 09:14:54.703498 497077 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0221 09:14:54.703624 497077 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0221 09:14:54.703651 497077 cni.go:93] Creating CNI manager for "" I0221 09:14:54.703661 497077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:14:54.703673 497077 start_flags.go:302] config: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:54.705486 497077 out.go:176] * Starting control plane node no-preload-20220221091339-6550 in cluster no-preload-20220221091339-6550 I0221 09:14:54.705526 497077 cache.go:120] Beginning downloading kic base image for docker with docker I0221 09:14:54.706834 497077 out.go:176] * Pulling base image ... 
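Before any pull actually happens, the image.go entries that follow first look the kic base image up in the local docker daemon and skip the pull on a hit ("Found ... in local docker daemon, skipping pull" a few entries further down). A rough sketch of that check-before-pull pattern using plain docker CLI calls; imageInDaemon is an illustrative helper, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has the image.
// `docker image inspect` exits non-zero when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		_ = exec.Command("docker", "pull", ref).Run() // error handling elided
	}
}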
I0221 09:14:54.706871 497077 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:14:54.706968 497077 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0221 09:14:54.707179 497077 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:14:54.707301 497077 cache.go:107] acquiring lock: {Name:mk9f52e4209628388c7268565716f70b6a94e740 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707300 497077 cache.go:107] acquiring lock: {Name:mkae39637d54454769ea96c0928557495a2624a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707484 497077 cache.go:107] acquiring lock: {Name:mk8eae83c87e69d4f61d57feebab23b9c618f6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707532 497077 cache.go:107] acquiring lock: {Name:mkc848fd9c1e80ffd1414dd8603c19c641b3fcb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707582 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists I0221 09:14:54.707598 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists I0221 09:14:54.707615 497077 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 326.476µs I0221 09:14:54.707636 497077 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded I0221 09:14:54.707620 497077 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 142.737µs I0221 09:14:54.707654 497077 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded I0221 09:14:54.707642 497077 cache.go:107] acquiring lock: {Name:mk8cb7540d8a1bd7faccdcc974630f93843749a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707669 497077 cache.go:107] acquiring lock: {Name:mk0340c3f1bf4216c7deeea4078501a3da4b3533 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707679 497077 cache.go:107] acquiring lock: {Name:mk048af2cde148e8a512f7653817cea4bb1a47e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707701 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists I0221 09:14:54.707723 497077 cache.go:115] 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 exists I0221 09:14:54.707739 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 exists I0221 09:14:54.707741 497077 cache.go:107] acquiring lock: {Name:mkd0cd2ae3afc8e39e716bbcd5f1e196bdbc0e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707764 497077 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" took 86.028µs I0221 09:14:54.707781 497077 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 succeeded I0221 09:14:54.707765 497077 cache.go:107] acquiring lock: {Name:mkf4838fe0f0754a09f1960b33e83e9fd73716a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707782 497077 cache.go:107] acquiring lock: {Name:mk4db3a52d1f4fba9dc9223f3164cb8742f00f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0221 09:14:54.707715 497077 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 75.715µs I0221 09:14:54.707806 497077 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded I0221 09:14:54.707799 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists I0221 09:14:54.707823 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 exists I0221 09:14:54.707829 497077 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 90.577µs I0221 09:14:54.707841 497077 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" took 78.398µs I0221 09:14:54.707851 497077 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 succeeded I0221 09:14:54.707881 497077 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded I0221 09:14:54.707744 497077 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" took 77.49µs I0221 09:14:54.707899 497077 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 succeeded I0221 09:14:54.707636 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists I0221 09:14:54.707919 497077 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 427.987µs I0221 09:14:54.707938 497077 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded I0221 09:14:54.707837 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists I0221 09:14:54.707962 497077 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 180.957µs I0221 09:14:54.707545 497077 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 exists I0221 09:14:54.707977 497077 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded I0221 09:14:54.707996 497077 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" took 707.222µs I0221 09:14:54.708010 497077 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 succeeded I0221 09:14:54.708017 497077 cache.go:87] Successfully saved all images to host disk. 
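Each cache.go:115/96/80 triple above is the same pattern repeated per image: stat the per-image tarball under .minikube/cache/images and skip the save when it already exists. A small sketch under those assumptions; cachePath is a hypothetical helper mirroring the paths in the log, not minikube's real layout code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps "k8s.gcr.io/pause:3.6" to
// "<minikubeHome>/cache/images/amd64/k8s.gcr.io/pause_3.6", as seen above.
func cachePath(minikubeHome, image string) string {
	return filepath.Join(minikubeHome, "cache", "images", "amd64",
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	p := cachePath(os.Getenv("MINIKUBE_HOME"), "k8s.gcr.io/pause:3.6")
	if _, err := os.Stat(p); err == nil {
		fmt.Println(p, "exists; save to tar file skipped")
	} else {
		fmt.Println("would save", p) // real code would export the image here
	}
}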
I0221 09:14:54.757072 497077 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0221 09:14:54.757120 497077 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0221 09:14:54.757137 497077 cache.go:208] Successfully downloaded all kic artifacts
I0221 09:14:54.757204 497077 start.go:313] acquiring machines lock for no-preload-20220221091339-6550: {Name:mk3240de6571e839de8f8161d174b6e05c7d8988 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0221 09:14:54.757325 497077 start.go:317] acquired machines lock for "no-preload-20220221091339-6550" in 98.473µs
I0221 09:14:54.757349 497077 start.go:93] Skipping create...Using existing machine configuration
I0221 09:14:54.757361 497077 fix.go:55] fixHost starting: 
I0221 09:14:54.757661 497077 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}}
I0221 09:14:54.793061 497077 fix.go:108] recreateIfNeeded on no-preload-20220221091339-6550: state=Stopped err=<nil>
W0221 09:14:54.793108 497077 fix.go:134] unexpected machine state, will restart: <nil>
I0221 09:14:54.065834 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Running}}
I0221 09:14:54.108359 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}}
I0221 09:14:54.147246 495766 cli_runner.go:133] Run: docker exec embed-certs-20220221091443-6550 stat /var/lib/dpkg/alternatives/iptables
I0221 09:14:54.252735 495766 oci.go:281] the created container "embed-certs-20220221091443-6550" has a running status.
I0221 09:14:54.252787 495766 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa...
I0221 09:14:54.394587 495766 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0221 09:14:54.497420 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}}
I0221 09:14:54.538065 495766 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0221 09:14:54.538091 495766 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220221091443-6550 chown docker:docker /home/docker/.ssh/authorized_keys]
I0221 09:14:54.646684 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}}
I0221 09:14:54.683698 495766 machine.go:88] provisioning docker machine ...
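The fixHost sequence above decides whether to restart a profile by inspecting the container's .State.Status. A sketch of that probe; note that the raw docker status strings are lowercase (e.g. "exited", "running"), which minikube maps to its own states such as Stopped:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus returns docker's view of the container state.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	st, err := containerStatus("no-preload-20220221091339-6550")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if st != "running" {
		fmt.Printf("unexpected machine state %q, will restart\n", st)
	}
}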
I0221 09:14:54.683738 495766 ubuntu.go:169] provisioning hostname "embed-certs-20220221091443-6550" I0221 09:14:54.683812 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:54.721118 495766 main.go:130] libmachine: Using SSH client type: native I0221 09:14:54.721290 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 } I0221 09:14:54.721306 495766 main.go:130] libmachine: About to run SSH command: sudo hostname embed-certs-20220221091443-6550 && echo "embed-certs-20220221091443-6550" | sudo tee /etc/hostname I0221 09:14:54.863859 495766 main.go:130] libmachine: SSH cmd err, output: : embed-certs-20220221091443-6550 I0221 09:14:54.863929 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:54.901280 495766 main.go:130] libmachine: Using SSH client type: native I0221 09:14:54.901415 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 } I0221 09:14:54.901436 495766 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sembed-certs-20220221091443-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220221091443-6550/g' /etc/hosts; else echo '127.0.1.1 embed-certs-20220221091443-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:14:55.027077 495766 main.go:130] libmachine: SSH cmd err, output: : I0221 09:14:55.027115 495766 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:14:55.027158 495766 ubuntu.go:177] setting up certificates I0221 09:14:55.027175 495766 provision.go:83] configureAuth start I0221 09:14:55.027236 495766 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220221091443-6550 I0221 09:14:55.064958 495766 provision.go:138] copyHostCerts I0221 09:14:55.065021 495766 exec_runner.go:144] found 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:14:55.065036 495766 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:14:55.065109 495766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:14:55.065213 495766 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:14:55.065231 495766 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:14:55.065265 495766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:14:55.065329 495766 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:14:55.065341 495766 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:14:55.065370 495766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:14:55.065422 495766 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220221091443-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220221091443-6550] I0221 09:14:55.190131 495766 provision.go:172] copyRemoteCerts I0221 09:14:55.190182 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:14:55.190228 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:55.229697 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:55.322901 495766 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0221 09:14:55.342173 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
I0221 09:14:55.361624 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0221 09:14:55.385423 495766 provision.go:86] duration metric: configureAuth took 358.231938ms
I0221 09:14:55.385454 495766 ubuntu.go:193] setting minikube options for container-runtime
I0221 09:14:55.385648 495766 config.go:176] Loaded profile config "embed-certs-20220221091443-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
I0221 09:14:55.385706 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:14:55.422978 495766 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:55.423143 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 }
I0221 09:14:55.423160 495766 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0221 09:14:55.551351 495766 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0221 09:14:55.551374 495766 ubuntu.go:71] root file system type: overlay
I0221 09:14:55.551603 495766 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0221 09:14:55.551680 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:14:55.592738 495766 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:55.592917 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 }
I0221 09:14:55.592983 495766 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0221 09:14:55.728704 495766 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0221 09:14:55.728787 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:14:55.763665 495766 main.go:130] libmachine: Using SSH client type: native
I0221 09:14:55.763863 495766 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49419 }
I0221 09:14:55.763893 495766 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0221 09:14:56.422335 495766 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-02-21 09:14:55.724332118 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0221 09:14:56.422379 495766 machine.go:91] provisioned docker machine in 1.738656889s
I0221 09:14:56.422390 495766 client.go:171] LocalClient.Create took 12.24132238s
I0221 09:14:56.422400 495766 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20220221091443-6550" took 12.241377204s
I0221 09:14:56.422410 495766 start.go:267] post-start starting for "embed-certs-20220221091443-6550" (driver="docker")
I0221 09:14:56.422415 495766 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0221 09:14:56.422480 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0221 09:14:56.422542 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550
I0221 09:14:56.456066 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker}
I0221 09:14:56.542630 495766 ssh_runner.go:195] Run: cat /etc/os-release
I0221 09:14:56.545460 495766 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0221 09:14:56.545480 495766 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0221 09:14:56.545491 495766 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0221 09:14:56.545497 495766 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0221 09:14:56.545508 495766 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
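The diff -u || { mv; daemon-reload; restart; } command above only rewrites the unit and restarts docker when the generated file differs from the installed one, so an unchanged host is left alone. The same idempotent-update idea, sketched in Go with paths and error handling simplified (the real flow runs these steps over SSH inside the container):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnit installs `desired` at `path` and restarts docker, but only
// when the content actually changed.
func updateUnit(path string, desired []byte) error {
	current, _ := os.ReadFile(path)
	if bytes.Equal(current, desired) {
		return nil // identical: no rewrite, no restart
	}
	// Stage the new unit next to the old one, then move it into place.
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	// Reload unit definitions and restart so the new ExecStart takes effect.
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "docker").Run()
}

func main() {
	_ = updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
}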
I0221 09:14:56.545569 495766 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... I0221 09:14:56.545648 495766 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:14:56.545743 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:14:56.552603 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:56.569802 495766 start.go:270] post-start completed in 147.380893ms I0221 09:14:56.570107 495766 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220221091443-6550 I0221 09:14:56.602861 495766 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/config.json ... I0221 09:14:56.603136 495766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:14:56.603185 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.636423 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:56.719646 495766 start.go:129] duration metric: createHost completed in 12.541291945s I0221 09:14:56.719670 495766 start.go:80] releasing machines lock for "embed-certs-20220221091443-6550", held for 12.541422547s I0221 09:14:56.719749 495766 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220221091443-6550 I0221 09:14:56.755073 495766 ssh_runner.go:195] Run: systemctl --version I0221 09:14:56.755120 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.755168 495766 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:14:56.755217 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:14:56.790615 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:56.792442 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:14:57.020347 495766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:14:57.030464 495766 ssh_runner.go:195] Run: sudo systemctl cat 
docker.service
I0221 09:14:57.041630 495766 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0221 09:14:57.041684 495766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0221 09:14:57.051394 495766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0221 09:14:57.064671 495766 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0221 09:14:57.148196 495766 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0221 09:14:57.232221 495766 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0221 09:14:57.242443 495766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0221 09:14:57.322703 495766 ssh_runner.go:195] Run: sudo systemctl start docker
I0221 09:14:57.332494 495766 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:14:57.375245 495766 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0221 09:14:57.417612 495766 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
I0221 09:14:57.417696 495766 cli_runner.go:133] Run: docker network inspect embed-certs-20220221091443-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0221 09:14:57.450706 495766 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0221 09:14:57.454061 495766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:14:53.366557 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:14:55.367507 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:14:57.367593 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:14:57.465653 495766 out.go:176] - kubelet.housekeeping-interval=5m
I0221 09:14:57.465719 495766 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
I0221 09:14:57.465769 495766 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:14:57.499249 495766 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0221 09:14:57.499329 495766 docker.go:537] Images already preloaded, skipping extraction
I0221 09:14:57.499379 495766 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0221 09:14:57.534216 495766 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0221 09:14:57.534243 495766 cache_images.go:84] Images are preloaded, skipping loading
I0221 09:14:57.534282 495766 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0221 09:14:57.620204 495766 cni.go:93] Creating CNI manager for ""
I0221 09:14:57.620227 495766 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:14:57.620235 495766 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0221 09:14:57.620247 495766 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220221091443-6550 NodeName:embed-certs-20220221091443-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0221 09:14:57.620360 495766 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "embed-certs-20220221091443-6550"
  kubeletExtraArgs:
    node-ip: 192.168.58.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0221 09:14:57.620435 495766 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220221091443-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2

[Install]
 config: {KubernetesVersion:v1.23.4 ClusterName:embed-certs-20220221091443-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0221 09:14:57.620483 495766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
I0221 09:14:57.627544 495766 binaries.go:44] Found k8s binaries, skipping transfer
I0221 09:14:57.627599 495766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0221 09:14:57.634610 495766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (384 bytes)
I0221 09:14:57.647700 495766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0221 09:14:57.660906 495766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
I0221 09:14:57.674055 495766 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0221 09:14:57.677021 495766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0221 09:14:57.686472 495766 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550 for IP: 192.168.58.2
I0221 09:14:57.686582 495766 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
I0221 09:14:57.686626 495766 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
I0221 09:14:57.686684 495766 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.key
I0221 09:14:57.686698 495766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.crt with IP's: []
I0221 09:14:57.788229 495766 crypto.go:156] Writing cert to
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.crt ... I0221 09:14:57.788262 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.crt: {Name:mkec8981966785f7e07560a482d7402b98e81ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.788468 495766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.key ... I0221 09:14:57.788484 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/client.key: {Name:mkffe615b6963103dbeccb0665b05a85c8805e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.788566 495766 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041 I0221 09:14:57.788581 495766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] I0221 09:14:57.856333 495766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041 ... I0221 09:14:57.856373 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041: {Name:mk61adee2b3ddd19cca3a47f6f629fd31c40a64e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.856592 495766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041 ... 
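
The lock.go:35 entries above show every cert write funneled through a named file lock with Delay:500ms and Timeout:1m0s. A minimal Go sketch of that acquire-then-write pattern, assuming a plain O_EXCL lockfile (minikube's actual locker is not shown in this log):

-- go sketch --
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// Acquire a per-path lock (here: a simple O_EXCL lockfile, an
// assumption), retrying every 500ms until a 1m timeout, then write
// via a temp file and an atomic rename.
func writeFileLocked(path string, data []byte, perm os.FileMode) error {
	lockPath := path + ".lock"
	deadline := time.Now().Add(time.Minute) // Timeout:1m0s from the log
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lockPath)
		}
		time.Sleep(500 * time.Millisecond) // Delay:500ms from the log
	}
	defer os.Remove(lockPath)

	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, perm); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic on the same filesystem
}

func main() {
	if err := writeFileLocked("client.crt", []byte("PEM data\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /go sketch --
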
I0221 09:14:57.856609 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041: {Name:mkb6619dc2a52f5977bfa969c6373ef50a0410aa Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.856711 495766 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt I0221 09:14:57.856771 495766 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key I0221 09:14:57.856815 495766 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key I0221 09:14:57.856829 495766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt with IP's: [] I0221 09:14:57.968944 495766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt ... I0221 09:14:57.968975 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt: {Name:mk1a6a4f1101db5f82e9a1d9b328dd92800d4dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.969176 495766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key ... 
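
The crypto.go:68 "Generating cert ... with IP's" records above correspond to issuing leaf certificates signed by the cached minikubeCA. A sketch of that issuance with crypto/x509, using the IP SANs reported for the apiserver cert; error handling is elided, and the stand-in CA below replaces the real ca.key the log says was reused:

-- go sketch --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (the real minikubeCA is loaded from .minikube/ca.key;
	// the log shows its generation being skipped because it exists).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs reported above: node IP, service IP,
	// localhost, and 10.0.0.1.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
-- /go sketch --
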
I0221 09:14:57.969193 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key: {Name:mk1192e141df4adaca670a33ef20c34eebac4456 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:14:57.969374 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:14:57.969413 495766 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:14:57.969427 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:14:57.969452 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:14:57.969477 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:14:57.969509 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:14:57.969549 495766 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:57.970447 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:14:57.988891 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:14:58.006496 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.crt --> 
/var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:14:58.024165 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/embed-certs-20220221091443-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:14:58.041995 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:14:58.060449 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:14:58.078267 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:14:58.095860 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:14:58.113569 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:14:58.131351 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:14:58.149204 495766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:14:58.167017 495766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:14:58.179866 495766 ssh_runner.go:195] Run: openssl version I0221 09:14:58.184620 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:14:58.192167 495766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:14:58.195132 495766 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:14:58.195172 495766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:14:58.199966 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:14:58.207367 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:14:58.214716 495766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:58.217752 495766 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:58.217791 495766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:14:58.222703 495766 ssh_runner.go:195] 
Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:14:58.230623 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:14:58.238011 495766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:14:58.241207 495766 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:14:58.241262 495766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:14:58.246138 495766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:14:58.254096 495766 kubeadm.go:391] StartCluster: {Name:embed-certs-20220221091443-6550 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:embed-certs-20220221091443-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:14:58.254217 495766 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:14:58.286449 495766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:14:58.293703 495766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:14:58.300962 495766 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:14:58.301022 495766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf 
/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:14:58.307987 495766 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:14:58.308037 495766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:14:54.795416 497077 out.go:176] * Restarting existing docker container for "no-preload-20220221091339-6550" ... I0221 09:14:54.795480 497077 cli_runner.go:133] Run: docker start no-preload-20220221091339-6550 I0221 09:14:55.189786 497077 cli_runner.go:133] Run: docker container inspect no-preload-20220221091339-6550 --format={{.State.Status}} I0221 09:14:55.229409 497077 kic.go:420] container "no-preload-20220221091339-6550" state is running. I0221 09:14:55.229776 497077 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:14:55.265712 497077 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/config.json ... I0221 09:14:55.265927 497077 machine.go:88] provisioning docker machine ... 
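
For the embed-certs profile above, the failed `ls -la /etc/kubernetes/*.conf` (exit status 2) is what routes minikube to a fresh `kubeadm init` instead of a restart, with SystemVerification added to the ignored preflight checks because of the docker driver. A sketch of that decision, run locally instead of over ssh_runner:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	// Probe for an existing control plane, as the log does.
	if exec.Command("sudo", append([]string{"ls", "-la"}, confs...)...).Run() == nil {
		fmt.Println("existing configuration found; would attempt cluster restart")
		return
	}
	// No configs -> fresh init. SystemVerification joins the ignored
	// preflight checks because of the docker driver (list abridged
	// from the log).
	ignored := []string{"Port-10250", "Swap", "Mem", "SystemVerification"}
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","))
	fmt.Println("would run:", strings.Join(cmd.Args, " "))
}
-- /go sketch --
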
I0221 09:14:55.265950 497077 ubuntu.go:169] provisioning hostname "no-preload-20220221091339-6550" I0221 09:14:55.265997 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:55.300719 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:55.300947 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:55.300962 497077 main.go:130] libmachine: About to run SSH command: sudo hostname no-preload-20220221091339-6550 && echo "no-preload-20220221091339-6550" | sudo tee /etc/hostname I0221 09:14:55.301593 497077 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42292->127.0.0.1:49424: read: connection reset by peer I0221 09:14:58.437161 497077 main.go:130] libmachine: SSH cmd err, output: : no-preload-20220221091339-6550 I0221 09:14:58.437240 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:58.471291 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:58.471422 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:58.471446 497077 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sno-preload-20220221091339-6550' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220221091339-6550/g' /etc/hosts; else echo '127.0.1.1 no-preload-20220221091339-6550' | sudo tee -a /etc/hosts; fi fi I0221 09:14:58.599291 497077 main.go:130] libmachine: SSH cmd err, output: : I0221 09:14:58.599327 497077 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube} I0221 09:14:58.599356 497077 ubuntu.go:177] setting up certificates I0221 09:14:58.599374 497077 provision.go:83] configureAuth start I0221 09:14:58.599432 497077 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:14:58.635416 497077 provision.go:138] 
copyHostCerts I0221 09:14:58.635490 497077 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ... I0221 09:14:58.635505 497077 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem I0221 09:14:58.635587 497077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes) I0221 09:14:58.635698 497077 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ... I0221 09:14:58.635723 497077 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem I0221 09:14:58.635763 497077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes) I0221 09:14:58.635848 497077 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ... I0221 09:14:58.635861 497077 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem I0221 09:14:58.635891 497077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes) I0221 09:14:58.636017 497077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220221091339-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220221091339-6550] I0221 09:14:58.819070 497077 provision.go:172] copyRemoteCerts I0221 09:14:58.819127 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0221 09:14:58.819194 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:58.854906 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:58.942893 497077 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0221 09:14:58.960791 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes) I0221 09:14:58.978476 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0221 09:14:58.996794 497077 provision.go:86] duration metric: configureAuth took 397.404469ms I0221 09:14:58.996825 497077 ubuntu.go:193] setting minikube options for container-runtime I0221 09:14:58.997032 497077 config.go:176] Loaded profile config "no-preload-20220221091339-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5-rc.0 I0221 09:14:58.997090 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.034468 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:59.034682 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:59.034700 497077 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0221 09:14:59.155226 497077 main.go:130] libmachine: SSH cmd err, output: : overlay I0221 09:14:59.155248 497077 ubuntu.go:71] root file system type: overlay I0221 09:14:59.155392 497077 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0221 09:14:59.155444 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.192388 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:59.192685 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:59.192751 497077 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0221 09:14:59.324197 497077 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0221 09:14:59.324270 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.359849 497077 main.go:130] libmachine: Using SSH client type: native I0221 09:14:59.360033 497077 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a1100] 0x7a41e0 [] 0s} 127.0.0.1 49424 } I0221 09:14:59.360060 497077 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0221 09:14:59.486930 497077 main.go:130] libmachine: SSH cmd err, output: : I0221 09:14:59.486959 497077 machine.go:91] provisioned docker machine in 4.221017657s I0221 09:14:59.486970 497077 start.go:267] post-start starting for "no-preload-20220221091339-6550" (driver="docker") I0221 09:14:59.486977 497077 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0221 09:14:59.487048 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0221 09:14:59.487084 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.521395 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:59.610807 497077 ssh_runner.go:195] Run: cat /etc/os-release I0221 09:14:59.613656 497077 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0221 09:14:59.613682 497077 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0221 09:14:59.613689 497077 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0221 09:14:59.613693 497077 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0221 09:14:59.613702 497077 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ... I0221 09:14:59.613745 497077 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ... 
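
The unit-file update for no-preload just above is deliberately idempotent: docker.service.new is written, `diff -u` compares it to the live unit, and only on a difference is it moved into place and docker reloaded, enabled, and restarted. (The `%!s(MISSING)` earlier in that sequence is Go's missing-operand printf marker: the command actually sent was a plain `printf %s "<unit file>"`, mangled by the logger's format string; likewise the kubelet config's `"0%!"(MISSING)` values are really `"0%"`.) A sketch of the same guard:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

// Install the new docker.service only when `diff` reports a change,
// then reload and restart -- mirroring the shell the log sends.
func main() {
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload &&
  sudo systemctl -f enable docker &&
  sudo systemctl -f restart docker
}`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
-- /go sketch --
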
I0221 09:14:59.613805 497077 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs I0221 09:14:59.613869 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs I0221 09:14:59.620854 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes) I0221 09:14:59.639388 497077 start.go:270] post-start completed in 152.406038ms I0221 09:14:59.639459 497077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0221 09:14:59.639511 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.673472 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:59.759445 497077 fix.go:57] fixHost completed within 5.00207894s I0221 09:14:59.759478 497077 start.go:80] releasing machines lock for "no-preload-20220221091339-6550", held for 5.002135289s I0221 09:14:59.759569 497077 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220221091339-6550 I0221 09:14:59.794168 497077 ssh_runner.go:195] Run: systemctl --version I0221 09:14:59.794213 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.794261 497077 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0221 09:14:59.794323 497077 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220221091339-6550 I0221 09:14:59.830266 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:14:59.830934 497077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/no-preload-20220221091339-6550/id_rsa Username:docker} I0221 09:15:00.059614 497077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0221 09:15:00.072126 497077 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0221 09:15:00.081745 497077 cruntime.go:272] skipping containerd shutdown because we are bound to it I0221 09:15:00.081807 497077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0221 09:15:00.091414 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0221 09:15:00.104576 497077 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0221 09:15:00.185593 497077 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0221 09:15:00.264404 497077 ssh_runner.go:195] 
Run: sudo systemctl cat docker.service I0221 09:15:00.274607 497077 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0221 09:15:00.356677 497077 ssh_runner.go:195] Run: sudo systemctl start docker I0221 09:15:00.367220 497077 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:15:00.408646 497077 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0221 09:15:00.453346 497077 out.go:203] * Preparing Kubernetes v1.23.5-rc.0 on Docker 20.10.12 ... I0221 09:15:00.453433 497077 cli_runner.go:133] Run: docker network inspect no-preload-20220221091339-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0221 09:15:00.490848 497077 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts I0221 09:15:00.494266 497077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:14:59.367833 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:01.866890 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:00.505904 497077 out.go:176] - kubelet.housekeeping-interval=5m I0221 09:15:00.505987 497077 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker I0221 09:15:00.506034 497077 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0221 09:15:00.542443 497077 docker.go:606] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 k8s.gcr.io/kube-proxy:v1.23.5-rc.0 k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 k8s.gcr.io/pause:3.6 kubernetesui/dashboard:v2.3.1 kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 gcr.io/k8s-minikube/busybox:1.28.4-glibc -- /stdout -- I0221 09:15:00.542468 497077 cache_images.go:84] Images are preloaded, skipping loading I0221 09:15:00.542516 497077 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0221 09:15:00.629839 497077 cni.go:93] Creating CNI manager for "" I0221 09:15:00.629866 497077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:15:00.629874 497077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0221 09:15:00.629885 497077 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220221091339-6550 NodeName:no-preload-20220221091339-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager 
ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0221 09:15:00.630008 497077 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "no-preload-20220221091339-6550" kubeletExtraArgs: node-ip: 192.168.67.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.67.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.5-rc.0 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0221 09:15:00.630090 497077 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220221091339-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 [Install] config: {KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0221 09:15:00.630139 497077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5-rc.0 I0221 09:15:00.637685 497077 binaries.go:44] Found k8s binaries, skipping transfer 
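
`binaries.go:44] Found k8s binaries, skipping transfer` reflects a cheap cache probe: if `sudo ls /var/lib/minikube/binaries/<version>` succeeds inside the node, the kubelet/kubeadm copy is skipped. A sketch, with local exec standing in for ssh_runner:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

// If the versioned directory already holds the k8s binaries, the
// (slow) transfer into the node is skipped. Paths mirror the log.
func binariesPresent(version string) bool {
	dir := "/var/lib/minikube/binaries/" + version
	return exec.Command("sudo", "ls", dir).Run() == nil
}

func main() {
	if binariesPresent("v1.23.5-rc.0") {
		fmt.Println("Found k8s binaries, skipping transfer")
	} else {
		fmt.Println("would scp kubelet/kubeadm into the node")
	}
}
-- /go sketch --
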
I0221 09:15:00.637764 497077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0221 09:15:00.644789 497077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes) I0221 09:15:00.657982 497077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes) I0221 09:15:00.670742 497077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes) I0221 09:15:00.684208 497077 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0221 09:15:00.687208 497077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0221 09:15:00.696515 497077 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550 for IP: 192.168.67.2 I0221 09:15:00.696618 497077 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key I0221 09:15:00.696661 497077 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key I0221 09:15:00.696755 497077 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.key I0221 09:15:00.696832 497077 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key.c7fa3a9e I0221 09:15:00.696886 497077 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key I0221 09:15:00.697009 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes) W0221 09:15:00.697050 497077 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes I0221 09:15:00.697065 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes) I0221 09:15:00.697098 497077 certs.go:388] found cert: 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes) I0221 09:15:00.697131 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes) I0221 09:15:00.697164 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes) I0221 09:15:00.697218 497077 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes) I0221 09:15:00.698143 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0221 09:15:00.715811 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0221 09:15:00.733265 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0221 09:15:00.750977 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0221 09:15:00.769398 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0221 09:15:00.788563 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0221 09:15:00.806153 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0221 09:15:00.823360 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0221 09:15:00.841202 497077 ssh_runner.go:362] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0221 09:15:00.858966 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes) I0221 09:15:00.877291 497077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes) I0221 09:15:00.894966 497077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0221 09:15:00.907784 497077 ssh_runner.go:195] Run: openssl version I0221 09:15:00.912646 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0221 09:15:00.920199 497077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0221 09:15:00.923468 497077 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem I0221 09:15:00.923522 497077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0221 09:15:00.928412 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0221 09:15:00.935630 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem" I0221 09:15:00.943451 497077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem I0221 09:15:00.946441 497077 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem I0221 09:15:00.946486 497077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem I0221 09:15:00.951550 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0" I0221 09:15:00.958531 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem" I0221 09:15:00.966088 497077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem I0221 09:15:00.969339 497077 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem I0221 09:15:00.969381 497077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem I0221 09:15:00.974253 497077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0" I0221 09:15:00.981331 497077 kubeadm.go:391] StartCluster: {Name:no-preload-20220221091339-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false 
HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:no-preload-20220221091339-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0221 09:15:00.981480 497077 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0221 09:15:01.015677 497077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0221 09:15:01.023263 497077 kubeadm.go:402] found existing configuration files, will attempt cluster restart I0221 09:15:01.023291 497077 kubeadm.go:601] restartCluster start I0221 09:15:01.023336 497077 ssh_runner.go:195] Run: sudo test -d /data/minikube I0221 09:15:01.030275 497077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0221 09:15:01.031227 497077 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220221091339-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:15:01.031637 497077 kubeconfig.go:127] "no-preload-20220221091339-6550" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig - will repair! I0221 09:15:01.032422 497077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:15:01.035063 497077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0221 09:15:01.042293 497077 api_server.go:165] Checking apiserver status ... 
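
The kubeconfig.go:116/127 records above show verification failing because the profile's context is absent, after which minikube rewrites the kubeconfig under a lock ("will repair!"). A sketch of such a repair with client-go's clientcmd; the server URL is illustrative and the long Jenkins workspace path is abbreviated:

-- go sketch --
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Abbreviated; the log shows the full Jenkins workspace path.
	const path = "/home/jenkins/.../kubeconfig"
	name := "no-preload-20220221091339-6550"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig()
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cluster := clientcmdapi.NewCluster()
		cluster.Server = "https://192.168.67.2:8443" // illustrative
		cfg.Clusters[name] = cluster
		cfg.AuthInfos[name] = clientcmdapi.NewAuthInfo()
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			fmt.Println("repair failed:", err)
			return
		}
		fmt.Printf("%q context was missing - repaired\n", name)
	}
}
-- /go sketch --
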
I0221 09:15:01.042341 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:01.057108 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:01.257508 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:01.257589 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:01.272689 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:01.457920 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:01.458008 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:01.472380 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:01.657667 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:01.657749 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:01.671846 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:01.858121 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:01.858197 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:01.873219 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:02.057537 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:02.057621 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:02.071822 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:02.258142 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:02.258214 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:02.272579 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:02.457275 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:02.457349 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:02.471283 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:02.657420 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:02.657492 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:02.673234 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:02.857338 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:02.857406 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:02.872086 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:03.057333 497077 api_server.go:165] Checking apiserver status ... 
I0221 09:15:03.057406 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:03.072150 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:03.257375 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:03.257455 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:03.271764 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:03.458080 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:03.458143 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:03.472368 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:03.657605 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:03.657670 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:03.671895 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:03.857252 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:03.857342 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:03.872788 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:04.058092 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:04.058182 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:04.073466 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:04.073487 497077 api_server.go:165] Checking apiserver status ... I0221 09:15:04.073535 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0221 09:15:04.087811 497077 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0221 09:15:04.087838 497077 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition I0221 09:15:04.087845 497077 kubeadm.go:1067] stopping kube-system containers ... 
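
[Annotation] The run of "Checking apiserver status ..." probes above is a fixed-interval poll: pgrep is retried roughly every 200ms until a kube-apiserver process appears or a short deadline expires, at which point minikube falls back to "needs reconfigure". A minimal Go sketch of the same pattern, assuming local execution for illustration (the real probes go through minikube's SSH runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess retries pgrep until it succeeds or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero while no kube-apiserver process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServerProcess(3 * time.Second); err != nil {
		// Matches the decision taken in the log: reconfigure instead of reuse.
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}
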
I0221 09:15:04.087896 497077 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0221 09:15:04.127206 497077 docker.go:438] Stopping containers: [5d15ca256109 347a4215d0ae c0c4b379d5e5 f7181f1f8daf 7a5f5b44b56f 8276accbaf09 8a563c0a42c4 f9f5c7cf75f7 ae45a8000b2b b955dacc6170 326ecf4c809c f643ab14017c 3a10ec39e5a4 8404008f7aea 1eba7820624f]
I0221 09:15:04.127280 497077 ssh_runner.go:195] Run: docker stop 5d15ca256109 347a4215d0ae c0c4b379d5e5 f7181f1f8daf 7a5f5b44b56f 8276accbaf09 8a563c0a42c4 f9f5c7cf75f7 ae45a8000b2b b955dacc6170 326ecf4c809c f643ab14017c 3a10ec39e5a4 8404008f7aea 1eba7820624f
I0221 09:15:04.165650 497077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0221 09:15:04.176836 497077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0221 09:15:04.184036 497077 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Feb 21 09:14 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Feb 21 09:14 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2059 Feb 21 09:14 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Feb 21 09:14 /etc/kubernetes/scheduler.conf
I0221 09:15:04.184085 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0221 09:15:04.191169 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0221 09:15:04.198275 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0221 09:15:04.205826 497077 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0221 09:15:04.205882 497077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0221 09:15:04.212938 497077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0221 09:15:04.221017 497077 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0221 09:15:04.221073 497077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0221 09:15:04.228660 497077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0221 09:15:04.237858 497077 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0221 09:15:04.237919 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:04.283303 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0221 09:15:04.366796 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221 09:15:06.366910 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False"
I0221
09:15:05.185878 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" I0221 09:15:05.354047 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:15:05.406771 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" I0221 09:15:05.462979 497077 api_server.go:51] waiting for apiserver process to appear ... I0221 09:15:05.463083 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:15:05.979284 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:15:06.478843 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:15:06.978709 497077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:15:07.023275 497077 api_server.go:71] duration metric: took 1.56029598s to wait for apiserver process to appear ... I0221 09:15:07.023310 497077 api_server.go:87] waiting for apiserver healthz status ... I0221 09:15:07.023323 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... I0221 09:15:09.932898 495766 out.go:203] - Generating certificates and keys ... I0221 09:15:09.935933 495766 out.go:203] - Booting up control plane ... I0221 09:15:08.367398 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:10.367690 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:12.867151 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:09.939117 495766 out.go:203] - Configuring RBAC rules ... 
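
[Annotation] The restart path above replays kubeadm's init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) with PATH pointed at the version-pinned binaries directory. A sketch of that sequence, with the command strings taken from the log but run locally rather than over minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.23.5-rc.0"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Phase order mirrors the ssh_runner invocations in the log.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}
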
I0221 09:15:09.941894 495766 cni.go:93] Creating CNI manager for "" I0221 09:15:09.941923 495766 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:15:09.941953 495766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:15:09.942114 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:09.942193 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=embed-certs-20220221091443-6550 minikube.k8s.io/updated_at=2022_02_21T09_15_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:10.395924 495766 ops.go:34] apiserver oom_adj: -16 I0221 09:15:10.396025 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:10.972819 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:11.472464 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:11.972445 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:12.472331 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:12.973190 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:13.473103 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:10.288934 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0221 09:15:10.288963 497077 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} I0221 09:15:10.789177 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ... 
I0221 09:15:10.794490 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0221 09:15:10.794516 497077 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0221 09:15:11.290065 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:15:11.294697 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0221 09:15:11.294728 497077 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0221 09:15:11.789231 497077 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0221 09:15:11.806954 497077 api_server.go:266] https://192.168.67.2:8443/healthz returned 200: ok
I0221 09:15:11.814884 497077 api_server.go:140] control plane version: v1.23.5-rc.0
I0221 09:15:11.814957 497077 api_server.go:130] duration metric: took 4.791639219s to wait for apiserver health ...
I0221 09:15:11.814979 497077 cni.go:93] Creating CNI manager for ""
I0221 09:15:11.815050 497077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0221 09:15:11.815064 497077 system_pods.go:43] waiting for kube-system pods to appear ...
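
[Annotation] The healthz exchange above is the normal bring-up progression: 403 while the request is still anonymous, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are finishing, then 200. A sketch of the ~500ms retry loop; skipping TLS verification here is an illustration-only shortcut, since a faithful client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403/500 bodies like the ones in the log land here and are retried.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
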
I0221 09:15:11.828697 497077 system_pods.go:59] 8 kube-system pods found I0221 09:15:11.828740 497077 system_pods.go:61] "coredns-64897985d-t6lcp" [f53edf0b-bb68-4aa6-83d9-8e1a356dfda9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0221 09:15:11.828750 497077 system_pods.go:61] "etcd-no-preload-20220221091339-6550" [3fa24a94-f41c-4cf8-bcb9-4808033aef36] Running I0221 09:15:11.828763 497077 system_pods.go:61] "kube-apiserver-no-preload-20220221091339-6550" [e30a563f-2bc9-4894-9a2d-f87bdb6c96be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0221 09:15:11.828773 497077 system_pods.go:61] "kube-controller-manager-no-preload-20220221091339-6550" [7300b4d8-82d8-49ca-b581-7e909cb1917e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0221 09:15:11.828788 497077 system_pods.go:61] "kube-proxy-hlrh9" [751f88ad-01cb-49a0-947b-a4213748c80e] Running I0221 09:15:11.828795 497077 system_pods.go:61] "kube-scheduler-no-preload-20220221091339-6550" [fbffbf1c-d65f-4cbe-bea6-80bc5da28f8a] Running I0221 09:15:11.828804 497077 system_pods.go:61] "metrics-server-7f49dcbd7-4tqkf" [7f53f035-82f2-4a85-a0ca-dba360593f86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:15:11.828815 497077 system_pods.go:61] "storage-provisioner" [6312fa75-6c0a-4240-b8cf-9dacbb061fa7] Running I0221 09:15:11.828822 497077 system_pods.go:74] duration metric: took 13.746908ms to wait for pod list to return data ... I0221 09:15:11.828836 497077 node_conditions.go:102] verifying NodePressure condition ... I0221 09:15:11.833876 497077 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki I0221 09:15:11.833905 497077 node_conditions.go:123] node cpu capacity is 8 I0221 09:15:11.833915 497077 node_conditions.go:105] duration metric: took 5.074671ms to run NodePressure ... I0221 09:15:11.833934 497077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" I0221 09:15:12.328996 497077 kubeadm.go:737] waiting for restarted kubelet to initialise ... I0221 09:15:12.333535 497077 kubeadm.go:752] kubelet initialised I0221 09:15:12.333563 497077 kubeadm.go:753] duration metric: took 4.536476ms waiting for restarted kubelet to initialise ... I0221 09:15:12.333572 497077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:15:12.339145 497077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-t6lcp" in "kube-system" namespace to be "Ready" ... 
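
[Annotation] The pod_ready lines that follow are checks of each pod's Ready condition against the API server. A minimal client-go sketch of the same check, assuming client-go is available and using the in-VM kubeconfig path seen in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// Mirrors the "Ready":"True"/"False" statuses reported by pod_ready.go.
		fmt.Printf("%s Ready=%v\n", p.Name, ready)
	}
}
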
I0221 09:15:14.867238 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:17.367114 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:13.972289 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:14.473131 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:14.972560 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:15.472492 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:15.972292 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:16.473142 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:16.972837 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:17.472652 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:17.972922 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:18.473047 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:14.417136 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:16.417642 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:18.418101 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:19.866690 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:21.866732 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:18.972808 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:19.473175 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:19.972502 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:20.472258 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:20.972822 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:21.472280 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:21.972878 495766 ssh_runner.go:195] Run: sudo 
/var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:22.472293 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:22.972455 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:23.472236 495766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:15:23.546435 495766 kubeadm.go:1020] duration metric: took 13.604362704s to wait for elevateKubeSystemPrivileges. I0221 09:15:23.546462 495766 kubeadm.go:393] StartCluster complete in 25.292374548s I0221 09:15:23.546476 495766 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:15:23.546590 495766 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:15:23.548292 495766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:15:24.065153 495766 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220221091443-6550" rescaled to 1 I0221 09:15:24.065200 495766 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:15:24.067480 495766 out.go:176] * Verifying Kubernetes components... I0221 09:15:24.067530 495766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:15:24.065290 495766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:15:24.065299 495766 addons.go:415] enableAddons start: toEnable=map[], additional=[] I0221 09:15:24.065489 495766 config.go:176] Loaded profile config "embed-certs-20220221091443-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4 I0221 09:15:24.067675 495766 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220221091443-6550" I0221 09:15:24.067689 495766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220221091443-6550" I0221 09:15:24.067661 495766 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220221091443-6550" I0221 09:15:24.067737 495766 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220221091443-6550" W0221 09:15:24.067746 495766 addons.go:165] addon storage-provisioner should already be in state true I0221 09:15:24.067776 495766 host.go:66] Checking if "embed-certs-20220221091443-6550" exists ... 
I0221 09:15:24.068065 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}} I0221 09:15:24.068216 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}} I0221 09:15:20.419719 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:22.917623 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:24.117721 495766 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:15:24.117845 495766 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:15:24.117859 495766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:15:24.117912 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:15:24.119141 495766 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220221091443-6550" W0221 09:15:24.119159 495766 addons.go:165] addon default-storageclass should already be in state true I0221 09:15:24.119181 495766 host.go:66] Checking if "embed-certs-20220221091443-6550" exists ... I0221 09:15:24.119561 495766 cli_runner.go:133] Run: docker container inspect embed-certs-20220221091443-6550 --format={{.State.Status}} I0221 09:15:24.150163 495766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:15:24.152277 495766 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220221091443-6550" to be "Ready" ... I0221 09:15:24.156494 495766 node_ready.go:49] node "embed-certs-20220221091443-6550" has status "Ready":"True" I0221 09:15:24.156521 495766 node_ready.go:38] duration metric: took 4.208371ms waiting for node "embed-certs-20220221091443-6550" to be "Ready" ... I0221 09:15:24.156533 495766 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:15:24.165003 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:15:24.168204 495766 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2pl94" in "kube-system" namespace to be "Ready" ... 
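
[Annotation] The cli_runner inspect above is how the SSH endpoint (127.0.0.1:49419 in this run) is discovered: docker's inspect template extracts the host port bound to the container's 22/tcp. A sketch using the same template string:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort returns the host port docker mapped to the container's SSH port.
func sshPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("embed-certs-20220221091443-6550")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
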
I0221 09:15:24.168723 495766 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:15:24.168748 495766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:15:24.168801 495766 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220221091443-6550 I0221 09:15:24.203690 495766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/embed-certs-20220221091443-6550/id_rsa Username:docker} I0221 09:15:24.363020 495766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:15:24.408664 495766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:15:25.610357 495766 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.460154783s) I0221 09:15:25.610390 495766 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS I0221 09:15:25.625315 495766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.262256298s) I0221 09:15:25.710944 495766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.302239283s) I0221 09:15:23.867202 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:26.367087 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:25.713479 495766 out.go:176] * Enabled addons: default-storageclass, storage-provisioner I0221 09:15:25.713504 495766 addons.go:417] enableAddons completed in 1.648214398s I0221 09:15:26.228068 495766 pod_ready.go:102] pod "coredns-64897985d-2pl94" in "kube-system" namespace has status "Ready":"False" I0221 09:15:26.727622 495766 pod_ready.go:92] pod "coredns-64897985d-2pl94" in "kube-system" namespace has status "Ready":"True" I0221 09:15:26.727655 495766 pod_ready.go:81] duration metric: took 2.559423348s waiting for pod "coredns-64897985d-2pl94" in "kube-system" namespace to be "Ready" ... I0221 09:15:26.727667 495766 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-rcbll" in "kube-system" namespace to be "Ready" ... 
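
[Annotation] The host-record step completed above edits the coredns ConfigMap in place: a hosts block mapping host.minikube.internal to the host gateway (192.168.58.1 on this network) is inserted ahead of the forward directive, then the ConfigMap is replaced. A sketch of that pipeline as a single bash invocation; the sed expression is abridged from the log, whose exact indentation whitespace was collapsed in transit:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	pipeline := kubectl + " -n kube-system get configmap coredns -o yaml" +
		` | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }'` +
		" | " + kubectl + " replace -f -"
	if out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput(); err != nil {
		fmt.Printf("host record injection failed: %v\n%s", err, out)
	}
}
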
I0221 09:15:28.737549 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:24.919389 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:27.417293 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:28.866946 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:31.367455 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:31.237326 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:33.737721 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:29.418032 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:31.418185 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:33.418410 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:33.867224 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:36.366594 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:36.237918 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:38.737249 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:35.918507 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:38.417749 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:38.866204 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:40.866820 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:42.867161 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:40.738215 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:43.238090 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:40.418302 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:42.917405 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:45.366433 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:47.366782 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:45.737608 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:47.737755 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" 
namespace has status "Ready":"False" I0221 09:15:44.918091 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:46.918152 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:49.367183 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:51.866295 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:50.237684 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:52.237829 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:49.418166 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:51.917568 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:53.917904 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:53.866557 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:55.867219 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:54.737606 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:57.237432 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:15:56.417829 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:58.918097 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:15:58.367535 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:00.866365 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:15:59.237796 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:01.738167 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:00.919349 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:03.417894 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:03.367494 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:05.367891 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:07.867050 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:04.237059 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:06.237917 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:08.238619 495766 pod_ready.go:102] pod 
"coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:05.917722 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:07.918337 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:09.867221 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:12.366939 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:10.737952 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:13.236998 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:09.918793 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:12.418654 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:14.867080 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:17.367523 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:15.237467 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:17.237838 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:14.918524 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:17.418433 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:19.866160 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:21.866878 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:19.737940 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:22.237799 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:19.418605 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:21.917158 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:23.917241 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:24.366585 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:26.366639 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:24.737496 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:27.237354 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:25.918347 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 
09:16:28.417343 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:28.866983 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:31.367058 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:29.237595 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:31.737862 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:30.417414 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:32.417980 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:33.367175 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:35.367281 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:37.866816 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:34.237755 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:36.737064 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:38.738243 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:34.418103 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:36.917322 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:38.918118 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:40.367242 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:42.867084 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:41.238236 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:43.737080 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:41.418017 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:43.418695 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:44.867117 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:47.367137 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:45.737106 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:48.237075 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:45.918081 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" 
namespace has status "Ready":"False" I0221 09:16:48.417811 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:49.867322 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:52.366506 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:50.237826 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:52.737823 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:50.917555 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:52.919350 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:54.867538 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:56.867663 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:55.237168 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:57.737187 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:16:55.418364 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:57.918467 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:16:59.367240 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:17:01.867170 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:16:59.737493 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:02.237245 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:00.418173 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:02.418722 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:03.867259 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:17:06.366958 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:17:04.237795 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:06.737122 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:08.737756 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:04.917358 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:07.420503 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:08.367111 481686 pod_ready.go:102] pod 
"metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:17:10.866482 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:17:11.236681 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:13.237115 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:09.917193 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:11.917715 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:13.366478 481686 pod_ready.go:102] pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace has status "Ready":"False" I0221 09:17:15.362217 481686 pod_ready.go:81] duration metric: took 4m0.400042148s waiting for pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace to be "Ready" ... E0221 09:17:15.362243 481686 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5b7b789f-jr69t" in "kube-system" namespace to be "Ready" (will not retry!) I0221 09:17:15.362281 481686 pod_ready.go:38] duration metric: took 4m1.599876939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:17:15.362311 481686 kubeadm.go:605] restartCluster took 4m50.480983318s W0221 09:17:15.362454 481686 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" I0221 09:17:15.362498 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force" I0221 09:17:15.237295 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:17.737915 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:18.175209 481686 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.812684621s) I0221 09:17:18.175276 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:17:18.185025 481686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0221 09:17:18.192447 481686 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0221 09:17:18.192507 481686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0221 09:17:18.199480 481686 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file 
or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0221 09:17:18.199532 481686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0221 09:17:14.418203 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:16.418817 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:18.918397 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:19.004199 481686 out.go:203] - Generating certificates and keys ... I0221 09:17:20.154486 481686 out.go:203] - Booting up control plane ... I0221 09:17:19.738185 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:22.237886 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:20.919065 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:23.417789 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:24.238025 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:26.736754 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:28.737285 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:25.418974 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:27.917549 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:29.697817 481686 out.go:203] - Configuring RBAC rules ... 
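[Editor's note] The pod_ready.go:102 lines that dominate this stretch are three concurrent test runs (PIDs 481686, 495766 and 497077), each polling a pod's Ready condition; the metrics-server wait above exhausted its 4m0s budget, which is what triggered the kubeadm reset/init sequence here. A minimal client-go sketch of that polling pattern, assuming a reachable kubeconfig; waitPodReady and the 2s interval are illustrative, not minikube's actual code:

    // Illustrative sketch (not minikube's code): poll a pod until its Ready
    // condition is True, mirroring the pod_ready.go:102 lines above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient lookup errors as "not ready yet"
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// 4m0s matches the WaitExtra timeout seen earlier in the log.
    	if err := waitPodReady(cs, "kube-system", "metrics-server-5b7b789f-jr69t", 4*time.Minute); err != nil {
    		fmt.Println("not ready:", err)
    	}
    }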
I0221 09:17:30.117382 481686 cni.go:93] Creating CNI manager for "" I0221 09:17:30.117409 481686 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0221 09:17:30.117456 481686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0221 09:17:30.117488 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:30.117513 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=old-k8s-version-20220221090948-6550 minikube.k8s.io/updated_at=2022_02_21T09_17_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:30.138076 481686 ops.go:34] apiserver oom_adj: -16 I0221 09:17:30.332701 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:30.962889 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:31.463137 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:31.963106 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:32.462475 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:32.962808 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:30.737617 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:32.737649 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:30.418236 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:32.418744 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:33.463336 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:33.963223 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:34.462476 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:34.963309 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:35.462739 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:35.962494 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:36.462808 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:36.962302 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:37.463170 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:37.962954 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:35.237642 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:37.737932 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:34.917465 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:36.918090 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:38.918163 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:38.462353 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:38.962334 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:39.462945 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:39.963254 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:40.462471 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:40.962357 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:41.463268 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:41.962492 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:42.462438 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:42.962696 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:40.237611 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:42.737701 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:41.417277 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:43.417810 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:43.462358 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:43.963162 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:44.462847 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:44.962328 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:45.462708 481686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0221 09:17:45.543971 481686 kubeadm.go:1020] duration metric: took 15.42652334s to wait for elevateKubeSystemPrivileges. I0221 09:17:45.544001 481686 kubeadm.go:393] StartCluster complete in 5m20.703452161s I0221 09:17:45.544025 481686 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:17:45.544116 481686 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig I0221 09:17:45.545695 481686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0221 09:17:46.064567 481686 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220221090948-6550" rescaled to 1 I0221 09:17:46.064648 481686 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0221 09:17:46.064685 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0221 09:17:46.066864 481686 out.go:176] * Verifying Kubernetes components... I0221 09:17:46.064756 481686 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[] I0221 09:17:46.067046 481686 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220221090948-6550" I0221 09:17:46.067065 481686 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220221090948-6550" I0221 09:17:46.067077 481686 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220221090948-6550" I0221 09:17:46.067078 481686 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220221090948-6550" I0221 09:17:46.067091 481686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220221090948-6550" I0221 09:17:46.067051 481686 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220221090948-6550" I0221 09:17:46.067104 481686 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220221090948-6550" W0221 09:17:46.067120 481686 addons.go:165] addon metrics-server should already be in state true I0221 09:17:46.067154 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ... W0221 09:17:46.067091 481686 addons.go:165] addon storage-provisioner should already be in state true I0221 09:17:46.067245 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ... 
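[Editor's note] The burst of `kubectl get sa default` invocations between 09:17:30 and 09:17:45 is the post-init wait for the "default" service account to exist before kube-system privileges are elevated (the elevateKubeSystemPrivileges step whose 15.4s duration is logged above). A hedged client-go equivalent, assuming a standard kubeconfig; the 500ms interval is inferred from the roughly half-second spacing of the timestamps, not taken from minikube's source:

    // Illustrative equivalent of the `kubectl get sa default` polling above:
    // wait until the "default" ServiceAccount exists, then proceed.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Poll every 500ms until the service account controller has created
    	// the default SA in the default namespace.
    	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
    		_, getErr := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    		return getErr == nil, nil
    	})
    	fmt.Println("default service account ready:", err == nil)
    }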
I0221 09:17:46.067105 481686 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220221090948-6550" W0221 09:17:46.067344 481686 addons.go:165] addon dashboard should already be in state true I0221 09:17:46.067383 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ... I0221 09:17:46.064932 481686 config.go:176] Loaded profile config "old-k8s-version-20220221090948-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0 I0221 09:17:46.067445 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:17:46.066931 481686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0221 09:17:46.067658 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:17:46.067687 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:17:46.067864 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:17:46.114243 481686 out.go:176] - Using image kubernetesui/dashboard:v2.3.1 I0221 09:17:46.115639 481686 out.go:176] - Using image k8s.gcr.io/echoserver:1.4 I0221 09:17:46.115714 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml I0221 09:17:46.115730 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes) I0221 09:17:46.115782 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:17:46.117743 481686 out.go:176] - Using image fake.domain/k8s.gcr.io/echoserver:1.4 I0221 09:17:46.117801 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml I0221 09:17:46.117809 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes) I0221 09:17:46.117855 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:17:46.125475 481686 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0221 09:17:46.125624 481686 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:17:46.125645 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0221 09:17:46.125705 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:17:46.135556 481686 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220221090948-6550" W0221 09:17:46.135586 481686 addons.go:165] addon default-storageclass should already be in state true I0221 09:17:46.135615 481686 host.go:66] Checking if "old-k8s-version-20220221090948-6550" exists ... I0221 09:17:46.136085 481686 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220221090948-6550 --format={{.State.Status}} I0221 09:17:46.166437 481686 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220221090948-6550" to be "Ready" ... I0221 09:17:46.166456 481686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0221 09:17:46.167667 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:17:46.170013 481686 node_ready.go:49] node "old-k8s-version-20220221090948-6550" has status "Ready":"True" I0221 09:17:46.170038 481686 node_ready.go:38] duration metric: took 3.562771ms waiting for node "old-k8s-version-20220221090948-6550" to be "Ready" ... I0221 09:17:46.170049 481686 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0221 09:17:46.173374 481686 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace to be "Ready" ... I0221 09:17:46.180921 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:17:46.183084 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:17:46.187194 481686 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml I0221 09:17:46.187215 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0221 09:17:46.187277 481686 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220221090948-6550 I0221 09:17:46.237820 481686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49409 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/old-k8s-version-20220221090948-6550/id_rsa Username:docker} I0221 09:17:46.322666 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0221 09:17:46.323751 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml I0221 09:17:46.323772 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes) I0221 09:17:46.323802 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml I0221 09:17:46.323821 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes) I0221 09:17:46.414165 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml I0221 09:17:46.414190 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes) I0221 09:17:46.416636 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml I0221 09:17:46.416744 481686 ssh_runner.go:362] scp memory --> 
/etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes) I0221 09:17:46.432835 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml I0221 09:17:46.432863 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes) I0221 09:17:46.506794 481686 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml I0221 09:17:46.506871 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes) I0221 09:17:46.514112 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml I0221 09:17:46.514139 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes) I0221 09:17:46.523096 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0221 09:17:46.527550 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml I0221 09:17:46.532049 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml I0221 09:17:46.532075 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes) I0221 09:17:46.617605 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml I0221 09:17:46.617639 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes) I0221 09:17:46.704195 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml I0221 09:17:46.704223 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes) I0221 09:17:46.726689 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml I0221 09:17:46.726721 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes) I0221 09:17:46.825437 481686 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0221 09:17:46.831248 481686 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml I0221 09:17:46.831280 481686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes) I0221 09:17:46.921668 481686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml I0221 09:17:47.620931 481686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.093333198s) I0221 09:17:47.620975 481686 
addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220221090948-6550" I0221 09:17:48.116573 481686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.194843902s) I0221 09:17:45.237588 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:47.237842 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:45.918077 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:47.920821 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:48.118737 481686 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard I0221 09:17:48.118777 481686 addons.go:417] enableAddons completed in 2.054027114s I0221 09:17:48.206323 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False" I0221 09:17:50.707111 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False" I0221 09:17:49.238934 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:51.738845 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:50.417131 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:52.418765 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:53.207204 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False" I0221 09:17:55.683856 481686 pod_ready.go:102] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"False" I0221 09:17:57.683807 481686 pod_ready.go:92] pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace has status "Ready":"True" I0221 09:17:57.683836 481686 pod_ready.go:81] duration metric: took 11.510430517s waiting for pod "coredns-5644d7b6d9-b7jrq" in "kube-system" namespace to be "Ready" ... I0221 09:17:57.683845 481686 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmrhh" in "kube-system" namespace to be "Ready" ... I0221 09:17:57.687261 481686 pod_ready.go:92] pod "kube-proxy-bmrhh" in "kube-system" namespace has status "Ready":"True" I0221 09:17:57.687281 481686 pod_ready.go:81] duration metric: took 3.430535ms waiting for pod "kube-proxy-bmrhh" in "kube-system" namespace to be "Ready" ... I0221 09:17:57.687289 481686 pod_ready.go:38] duration metric: took 11.517225526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... 
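[Editor's note] With the addons applied and coredns/kube-proxy finally Ready, the log next waits for the apiserver process and probes https://192.168.49.2:8443/healthz, expecting a 200 "ok" response. A minimal sketch of such a probe; the insecure TLS config is purely for illustration, since minikube itself authenticates with the cluster's client certificates rather than skipping verification:

    // Illustration only: probe the apiserver healthz endpoint as the
    // api_server.go lines below do, and report the status and body.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }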
I0221 09:17:57.687334 481686 api_server.go:51] waiting for apiserver process to appear ... I0221 09:17:57.687382 481686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0221 09:17:57.711089 481686 api_server.go:71] duration metric: took 11.646398188s to wait for apiserver process to appear ... I0221 09:17:57.711122 481686 api_server.go:87] waiting for apiserver healthz status ... I0221 09:17:57.711138 481686 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0221 09:17:57.715750 481686 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok I0221 09:17:57.716530 481686 api_server.go:140] control plane version: v1.16.0 I0221 09:17:57.716553 481686 api_server.go:130] duration metric: took 5.42444ms to wait for apiserver health ... I0221 09:17:57.716562 481686 system_pods.go:43] waiting for kube-system pods to appear ... I0221 09:17:57.719359 481686 system_pods.go:59] 4 kube-system pods found I0221 09:17:57.719387 481686 system_pods.go:61] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:57.719393 481686 system_pods.go:61] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:57.719403 481686 system_pods.go:61] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:57.719412 481686 system_pods.go:61] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:57.719418 481686 system_pods.go:74] duration metric: took 2.851415ms to wait for pod list to return data ... I0221 09:17:57.719431 481686 default_sa.go:34] waiting for default service account to be created ... I0221 09:17:57.721393 481686 default_sa.go:45] found service account: "default" I0221 09:17:57.721412 481686 default_sa.go:55] duration metric: took 1.97454ms for default service account to be created ... I0221 09:17:57.721418 481686 system_pods.go:116] waiting for k8s-apps to be running ... 
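[Editor's note] The system_pods.go checks that follow keep finding only 4 kube-system pods (the etcd, kube-apiserver, kube-controller-manager and kube-scheduler mirror pods have not been registered with the apiserver yet on this v1.16.0 cluster) and retry with a growing delay. A stdlib-only sketch of that backoff shape; the 1.4 growth factor is an assumption read off the logged 214ms -> 293ms -> 355ms -> 480ms progression, not minikube's actual retry code:

    // Hypothetical sketch of the retry cadence in the retry.go:31 lines below:
    // run a check, sleeping with a multiplicatively growing delay until a
    // deadline passes.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func retryWithBackoff(maxWait time.Duration, check func() error) error {
    	delay := 200 * time.Millisecond
    	deadline := time.Now().Add(maxWait)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		delay = time.Duration(float64(delay) * 1.4) // assumed growth factor
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(10*time.Second, func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("missing components: etcd, kube-apiserver")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }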
I0221 09:17:57.723938 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:57.723960 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:57.723967 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:57.723974 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:57.723978 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:57.723994 481686 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:57.942423 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:57.942454 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:57.942462 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:57.942472 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:57.942478 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:57.942495 481686 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:54.238088 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:56.737044 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:54.917994 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:56.918138 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:17:58.239424 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:58.239452 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:58.239456 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:58.239463 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:58.239468 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:58.239483 481686 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:58.598172 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:58.598203 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:58.598209 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:58.598218 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" 
[ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:58.598225 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:58.598247 481686 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:59.083302 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:59.083333 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:59.083338 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:59.083346 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:59.083351 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:59.083368 481686 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:59.631168 481686 system_pods.go:86] 4 kube-system pods found I0221 09:17:59.631199 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:17:59.631206 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:17:59.631215 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:17:59.631221 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:17:59.631237 481686 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:00.319146 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:00.319175 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:00.319180 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:00.319188 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:00.319192 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:00.319207 481686 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:01.362111 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:01.362142 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:01.362149 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:01.362158 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / 
Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:01.362164 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:01.362181 481686 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:02.390274 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:02.390307 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:02.390312 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:02.390319 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:02.390324 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:02.390338 481686 retry.go:31] will retry after 1.268973106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:17:59.237324 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:01.737286 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:17:59.417757 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:01.918299 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:03.664086 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:03.664125 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:03.664135 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:03.664149 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:03.664160 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:03.664186 481686 retry.go:31] will retry after 1.733071555s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:05.400816 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:05.400845 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:05.400850 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:05.400858 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:05.400862 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:05.400878 481686 retry.go:31] will retry after 2.410580953s: missing components: etcd, kube-apiserver, 
kube-controller-manager, kube-scheduler I0221 09:18:07.815378 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:07.815408 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:07.815417 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:07.815426 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:07.815432 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:07.815450 481686 retry.go:31] will retry after 3.437877504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:04.237718 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:06.737668 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:04.417897 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:06.918123 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:11.259836 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:11.259863 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:11.259871 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:11.259877 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:11.259882 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:11.259897 481686 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:09.238310 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:11.737798 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:09.417840 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:11.418215 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:13.418290 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:14.525258 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:14.525285 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:14.525290 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:14.525298 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: 
[metrics-server]) I0221 09:18:14.525307 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:14.525326 481686 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:14.237001 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:16.237084 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:18.737356 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:15.917952 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:17.918144 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:18.614985 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:18.615062 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:18.615070 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:18.615080 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:18.615088 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:18.615108 481686 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:21.237651 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:23.737263 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:19.918512 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:22.417587 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:25.021377 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:25.021411 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:25.021425 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:25.021439 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:25.021446 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:25.021473 481686 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:25.738096 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:28.237653 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:24.418437 497077 pod_ready.go:102] pod 
"coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:26.918202 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:31.090428 481686 system_pods.go:86] 4 kube-system pods found I0221 09:18:31.090463 481686 system_pods.go:89] "coredns-5644d7b6d9-b7jrq" [adbb6ee0-5428-4641-a9ce-e84924c25533] Running I0221 09:18:31.090470 481686 system_pods.go:89] "kube-proxy-bmrhh" [a1f01d6d-fff3-436b-bbae-7301b4e82f06] Running I0221 09:18:31.090485 481686 system_pods.go:89] "metrics-server-5b7b789f-dghgt" [ba56eb03-4694-4263-afb9-766698a51a11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server]) I0221 09:18:31.090494 481686 system_pods.go:89] "storage-provisioner" [7ca3bdcb-62ac-48f8-98f6-688c160c889a] Running I0221 09:18:31.090512 481686 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler I0221 09:18:30.737781 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:33.237355 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:29.417994 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:31.418203 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:33.919742 497077 pod_ready.go:102] pod "coredns-64897985d-t6lcp" in "kube-system" namespace has status "Ready":"False" I0221 09:18:35.237575 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" I0221 09:18:37.237773 495766 pod_ready.go:102] pod "coredns-64897985d-rcbll" in "kube-system" namespace has status "Ready":"False" * * ==> Docker <== * -- Logs begin at Mon 2022-02-21 09:07:35 UTC, end at Mon 2022-02-21 09:18:39 UTC. 
-- Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592500199Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592523907Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592538696Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.592546949Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.598167477Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603475567Z" level=warning msg="Your kernel does not support CPU realtime scheduler" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603503353Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603508973Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.603672025Z" level=info msg="Loading containers: start." Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.688849439Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.724291378Z" level=info msg="Loading containers: done." Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.736718437Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.736793937Z" level=info msg="Daemon has completed initialization" Feb 21 09:07:37 kubenet-20220221084933-6550 systemd[1]: Started Docker Application Container Engine. 
Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.755443583Z" level=info msg="API listen on [::]:2376" Feb 21 09:07:37 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:07:37.759148963Z" level=info msg="API listen on /var/run/docker.sock" Feb 21 09:08:15 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:15.430229963Z" level=info msg="ignoring event" container=b4d0b09fc93c25117ea61667b96317884a15c03f4858f4c45bd1e396cd363514 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:08:15 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:15.536638943Z" level=info msg="ignoring event" container=eea520917917d4a2be1b0666a121a5d7f45c3d95ac7905327d287d61d815b40e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:08:36 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:08:36.646212875Z" level=info msg="ignoring event" container=d514dd85625fd8c62e58361e26a2c9be6fe300c8b78e3c122e232d1992f24b85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:09:07 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:09:07.650166793Z" level=info msg="ignoring event" container=a97ee22eacba2dca5bca29703201332d8a9269c81b84c9092251a87ec610e248 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:09:51 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:09:51.587076436Z" level=info msg="ignoring event" container=4bd886a067c518fe30582a3f0670f0a8bf70b070f8181c72f4670434a1b33a60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:10:49 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:10:49.565679742Z" level=info msg="ignoring event" container=2965c9e60bc0a6c53488139147a9b08a5f0f7d8df4f737f43d7166d1649d012f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:12:10 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:12:10.565718850Z" level=info msg="ignoring event" container=b01b78e17698eaa90b27a2bcb80acab11164a3322c3ba3f8c2b1435e48a1eb8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:14:11 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:14:11.592208704Z" level=info msg="ignoring event" container=17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 21 09:17:34 kubenet-20220221084933-6550 dockerd[458]: time="2022-02-21T09:17:34.562442082Z" level=info msg="ignoring event" container=0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 0ff3389b32145 6e38f40d628db About a minute ago Exited storage-provisioner 6 916953401c890 08b974dc9b1e2 k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 6 minutes ago Running dnsutils 0 09669f070f8f2 137bf98e19abd a4ca41631cc7a 10 minutes ago Running coredns 0 a70871c657779 4349ba0d8abd8 2114245ec4d6b 10 minutes ago Running kube-proxy 0 dacf1dd44398f a74500fb26ddd 25f8c7f3da61c 10 minutes ago Running etcd 0 1238581788c25 2162c71d2bacc 62930710c9634 10 minutes ago Running kube-apiserver 0 7dfd27d72f637 dceb444a0ede6 25444908517a5 10 minutes ago Running 
kube-controller-manager 0 0bda65172cb04 390268e8d3874 aceacb6244f9f 10 minutes ago Running kube-scheduler 0 9dcd04836497d * * ==> coredns [137bf98e19ab] <== * [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes" * * ==> describe nodes <== * Name: kubenet-20220221084933-6550 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=kubenet-20220221084933-6550 kubernetes.io/os=linux minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=kubenet-20220221084933-6550 minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_02_21T09_07_51_0700 minikube.k8s.io/version=v1.25.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 21 Feb 2022 09:07:47 +0000 Taints: Unschedulable: false Lease: HolderIdentity: kubenet-20220221084933-6550 AcquireTime: RenewTime: Mon, 21 Feb 2022 09:18:34 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 09:17:33 +0000 Mon, 21 Feb 2022 09:07:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 09:17:33 +0000 Mon, 21 Feb 2022 09:07:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 09:17:33 +0000 Mon, 21 Feb 2022 09:07:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 09:17:33 +0000 Mon, 21 Feb 2022 09:08:01 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.76.2 Hostname: kubenet-20220221084933-6550 Capacity: cpu: 8 ephemeral-storage: 304695084Ki hugepages-1Gi: 0 hugepages-2Mi: 0 
*
* ==> describe nodes <==
*
Name:               kubenet-20220221084933-6550
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubenet-20220221084933-6550
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9
                    minikube.k8s.io/name=kubenet-20220221084933-6550
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_02_21T09_07_51_0700
                    minikube.k8s.io/version=v1.25.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 21 Feb 2022 09:07:47 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kubenet-20220221084933-6550
  AcquireTime:     <unset>
  RenewTime:       Mon, 21 Feb 2022 09:18:34 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:07:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:07:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:07:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 21 Feb 2022 09:17:33 +0000   Mon, 21 Feb 2022 09:08:01 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    kubenet-20220221084933-6550
Capacity:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304695084Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32874648Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                0fc3953c-2ccc-4688-916f-cad0f4a89c0d
  Boot ID:                    36f9c729-2a96-4807-bb74-314dc2113999
  Kernel Version:             5.11.0-1029-gcp
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.4
  Kube-Proxy Version:         v1.23.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
  default                     netcat-668db85669-4md9w                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
  kube-system                 coredns-64897985d-cx6k8                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
  kube-system                 etcd-kubenet-20220221084933-6550                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
  kube-system                 kube-apiserver-kubenet-20220221084933-6550            250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
  kube-system                 kube-controller-manager-kubenet-20220221084933-6550   200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
  kube-system                 kube-proxy-npgzw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
  kube-system                 kube-scheduler-kubenet-20220221084933-6550            100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From        Message
  ----    ------                   ---   ----        -------
  Normal  Starting                 10m   kube-proxy
  Normal  NodeHasSufficientMemory  10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  10m   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 10m   kubelet     Starting kubelet.
  Normal  NodeReady                10m   kubelet     Node kubenet-20220221084933-6550 status is now: NodeReady
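The dmesg section that follows is dominated by "martian source" warnings: packets with pod-network source addresses (10.244.0.x) arriving on the node's primary interface eth0 rather than the kubenet bridge. Those addresses sit inside the node's PodCIDR shown above. A minimal Go sketch of that containment check, using only values quoted in this log:

package main

import (
	"fmt"
	"net"
)

// Confirms that the "martian source" addresses in the dmesg output below
// (10.244.0.223 and 10.244.0.4) fall inside the node's PodCIDR
// (10.244.0.0/24, from the node description above). The addresses are
// taken from this log, not computed by the test suite.
func main() {
	_, podCIDR, err := net.ParseCIDR("10.244.0.0/24")
	if err != nil {
		panic(err)
	}
	for _, ip := range []string{"10.244.0.223", "10.244.0.4"} {
		fmt.Printf("%s in %s: %v\n", ip, podCIDR, podCIDR.Contains(net.ParseIP(ip)))
	}
}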
*
* ==> dmesg <==
*
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +2.963841] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +1.035853] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[ +1.023933] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 36 cf db 3f 18 08 06
[Feb21 09:14] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.035516] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.019972] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +2.943777] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.027861] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.019959] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +2.951870] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.015815] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06
[ +1.027946] IPv4: martian source 10.244.0.223 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 ed b5 9f d1 d5 08 06

*
* ==> etcd [a74500fb26dd] <==
*
{"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubenet-20220221084933-6550 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
{"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T09:07:45.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-02-21T09:07:45.522Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-02-21T09:07:45.522Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-02-21T09:07:45.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
{"level":"info","ts":"2022-02-21T09:07:45.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-02-21T09:09:53.822Z","caller":"traceutil/trace.go:171","msg":"trace[491778915] linearizableReadLoop","detail":"{readStateIndex:559; appliedIndex:559; }","duration":"379.022828ms","start":"2022-02-21T09:09:53.443Z","end":"2022-02-21T09:09:53.822Z","steps":["trace[491778915] 'read index received' (duration: 379.013826ms)","trace[491778915] 'applied index is now lower than readState.Index' (duration: 7.979µs)"],"step_count":2} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"379.166345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"310.062402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[1332575891] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:522; }","duration":"379.282463ms","start":"2022-02-21T09:09:53.443Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[1332575891] 'agreement among raft nodes before linearized reading' (duration: 379.128871ms)"],"step_count":1} {"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[995360368] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:522; }","duration":"310.090407ms","start":"2022-02-21T09:09:53.513Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[995360368] 'agreement among raft nodes before linearized reading' (duration: 310.042415ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:09:53.443Z","time spent":"379.334172ms","remote":"127.0.0.1:40772","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":28,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true "} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"295.262502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-cx6k8\" ","response":"range_response_count:1 size:4636"} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-21T09:09:53.513Z","time spent":"310.195138ms","remote":"127.0.0.1:40808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":30,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "} 
{"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[970577559] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-cx6k8; range_end:; response_count:1; response_revision:522; }","duration":"295.3246ms","start":"2022-02-21T09:09:53.527Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[970577559] 'agreement among raft nodes before linearized reading' (duration: 295.211653ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:53.823Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"175.25773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:53.823Z","caller":"traceutil/trace.go:171","msg":"trace[1487035562] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:522; }","duration":"175.40853ms","start":"2022-02-21T09:09:53.647Z","end":"2022-02-21T09:09:53.823Z","steps":["trace[1487035562] 'agreement among raft nodes before linearized reading' (duration: 175.220445ms)"],"step_count":1} {"level":"warn","ts":"2022-02-21T09:09:54.207Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.896975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-02-21T09:09:54.207Z","caller":"traceutil/trace.go:171","msg":"trace[197536161] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:523; }","duration":"145.983587ms","start":"2022-02-21T09:09:54.061Z","end":"2022-02-21T09:09:54.207Z","steps":["trace[197536161] 'agreement among raft nodes before linearized reading' (duration: 89.066028ms)","trace[197536161] 'count revisions from in-memory index tree' (duration: 56.806429ms)"],"step_count":2} {"level":"info","ts":"2022-02-21T09:17:45.538Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":617} {"level":"info","ts":"2022-02-21T09:17:45.539Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":617,"took":"727.44µs"} * * ==> kernel <== * 09:18:40 up 1:01, 0 users, load average: 0.80, 1.43, 2.17 Linux kubenet-20220221084933-6550 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [2162c71d2bac] <== * I0221 09:07:47.811713 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0221 09:07:47.820829 1 shared_informer.go:247] Caches are synced for node_authorizer I0221 09:07:47.823835 1 cache.go:39] Caches are synced for autoregister controller I0221 09:07:47.824219 1 cache.go:39] Caches are synced for AvailableConditionController controller I0221 09:07:47.824832 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0221 09:07:48.707923 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). 
I0221 09:07:48.715124 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0221 09:07:48.727648 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0221 09:07:48.732249 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0221 09:07:48.732268 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0221 09:07:49.169828 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0221 09:07:49.212301 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0221 09:07:49.319405 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0221 09:07:49.325071 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0221 09:07:49.326070 1 controller.go:611] quota admission added evaluator for: endpoints
I0221 09:07:49.329548 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0221 09:07:49.836734 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0221 09:07:50.931222 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0221 09:07:50.941473 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0221 09:07:50.957175 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0221 09:07:51.218176 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0221 09:08:03.652956 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0221 09:08:03.767118 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0221 09:08:04.943484 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0221 09:12:16.210623 1 alloc.go:329] "allocated clusterIPs" service="default/netcat" clusterIPs=map[IPv4:10.108.177.0]

*
* ==> kube-controller-manager [dceb444a0ede] <==
*
I0221 09:08:03.001017 1 shared_informer.go:247] Caches are synced for expand
I0221 09:08:03.002227 1 shared_informer.go:247] Caches are synced for ephemeral
I0221 09:08:03.009218 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0221 09:08:03.012397 1 shared_informer.go:247] Caches are synced for node
I0221 09:08:03.012453 1 range_allocator.go:173] Starting range CIDR allocator
I0221 09:08:03.012460 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0221 09:08:03.012472 1 shared_informer.go:247] Caches are synced for cidrallocator
I0221 09:08:03.017357 1 range_allocator.go:374] Set node kubenet-20220221084933-6550 PodCIDR to [10.244.0.0/24]
I0221 09:08:03.050538 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0221 09:08:03.050540 1 shared_informer.go:247] Caches are synced for endpoint
I0221 09:08:03.077232 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0221 09:08:03.101353 1 shared_informer.go:247] Caches are synced for crt configmap
I0221 09:08:03.158146 1 shared_informer.go:247] Caches are synced for resource quota
I0221 09:08:03.204485 1 shared_informer.go:247] Caches are synced for resource quota
I0221 09:08:03.619721 1 shared_informer.go:247] Caches are synced for garbage collector
I0221 09:08:03.655018 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0221 09:08:03.661327 1 shared_informer.go:247] Caches are synced for garbage collector I0221 09:08:03.661351 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0221 09:08:03.750524 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1" I0221 09:08:03.810865 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-npgzw" I0221 09:08:04.005746 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nt6xl" I0221 09:08:04.013863 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cx6k8" I0221 09:08:04.029545 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-nt6xl" I0221 09:12:16.225556 1 event.go:294] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-668db85669 to 1" I0221 09:12:16.231770 1 event.go:294] "Event occurred" object="default/netcat-668db85669" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-668db85669-4md9w" * * ==> kube-proxy [4349ba0d8abd] <== * I0221 09:08:04.903945 1 node.go:163] Successfully retrieved node IP: 192.168.76.2 I0221 09:08:04.904021 1 server_others.go:138] "Detected node IP" address="192.168.76.2" I0221 09:08:04.904060 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0221 09:08:04.931851 1 server_others.go:206] "Using iptables Proxier" I0221 09:08:04.932139 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0221 09:08:04.932159 1 server_others.go:214] "Creating dualStackProxier for iptables" I0221 09:08:04.932187 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0221 09:08:04.939605 1 server.go:656] "Version info" version="v1.23.4" I0221 09:08:04.940800 1 config.go:317] "Starting service config controller" I0221 09:08:04.940850 1 shared_informer.go:240] Waiting for caches to sync for service config I0221 09:08:04.940940 1 config.go:226] "Starting endpoint slice config controller" I0221 09:08:04.940960 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0221 09:08:05.041845 1 shared_informer.go:247] Caches are synced for service config I0221 09:08:05.041861 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [390268e8d387] <== * W0221 09:07:47.813936 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster 
E0221 09:07:47.814643 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 09:07:47.813941 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0221 09:07:47.814672 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0221 09:07:47.814763 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0221 09:07:47.814703 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0221 09:07:48.639695 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0221 09:07:48.639756 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0221 09:07:48.646952 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0221 09:07:48.646990 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0221 09:07:48.736565 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0221 09:07:48.736597 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0221 09:07:48.746758 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0221 09:07:48.746785 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
cannot list resource "replicasets" in API group "apps" at the cluster scope W0221 09:07:48.849593 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0221 09:07:48.849635 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0221 09:07:48.859685 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0221 09:07:48.859728 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0221 09:07:48.895897 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0221 09:07:48.895926 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0221 09:07:48.911402 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0221 09:07:48.911440 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0221 09:07:48.918562 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0221 09:07:48.918603 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope I0221 09:07:49.309140 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2022-02-21 09:07:35 UTC, end at Mon 2022-02-21 09:18:40 UTC. 
Feb 21 09:15:29 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:15:29.427872 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:15:44 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:15:44.427558 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:15:44 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:15:44.427772 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:15:58 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:15:58.427716 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:15:58 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:15:58.427951 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:11 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:11.427570 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:11 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:11.427867 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:26 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:26.427859 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:26 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:26.428166 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:37 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:37.427517 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:37 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:37.427743 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:16:51 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:16:51.428160 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:16:51 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:16:51.428359 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:17:04 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:04.427362 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:17:35 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:35.327086 1957 scope.go:110] "RemoveContainer" containerID="17c37bb932ac684b0cf429ef76c908b6c5e92f52e76facaaba13b647bc4fba44"
Feb 21 09:17:35 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:35.327384 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:17:35 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:17:35.327634 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:17:50 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:17:50.428054 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:17:50 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:17:50.428256 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:18:04 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:18:04.427285 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:18:04 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:18:04.427560 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:18:16 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:18:16.427885 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:18:16 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:18:16.428125 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
Feb 21 09:18:31 kubenet-20220221084933-6550 kubelet[1957]: I0221 09:18:31.428209 1957 scope.go:110] "RemoveContainer" containerID="0ff3389b32145fb4ede370416350a585e2c9a8db98aa3d37c88bef7fc9e534c9"
Feb 21 09:18:31 kubenet-20220221084933-6550 kubelet[1957]: E0221 09:18:31.428424 1957 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8b81b99b-2ad8-4e82-9589-bb482d30d8b7)\"" pod="kube-system/storage-provisioner" podUID=8b81b99b-2ad8-4e82-9589-bb482d30d8b7
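The CrashLoopBackOff entries above and the storage-provisioner log below point at the same symptom: from inside the pod network, the apiserver's ClusterIP (10.96.0.1) is unreachable, which on this kubenet cluster is consistent with the martian-source warnings in dmesg. A hypothetical minimal Go reproduction of the failing probe, using only the URL and 32s timeout quoted in the log line (the real provisioner uses client-go with the in-cluster CA, not raw HTTP):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// In-pod sketch of the version probe that times out in the
// storage-provisioner log below. InsecureSkipVerify stands in for the
// in-cluster CA bundle a real client would load.
func main() {
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
	if err != nil {
		fmt.Println("error getting server version:", err) // matches the F0221 line below
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver reachable:", resp.Status)
}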
*
* ==> storage-provisioner [0ff3389b3214] <==
*
I0221 09:17:04.545461 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0221 09:17:34.547448 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubenet-20220221084933-6550 -n kubenet-20220221084933-6550
helpers_test.go:262: (dbg) Run: kubectl --context kubenet-20220221084933-6550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestNetworkPlugins/group/kubenet]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context kubenet-20220221084933-6550 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 describe pod : exit status 1 (40.9887ms)

** stderr **
error: resource name may not be empty
** /stderr **

helpers_test.go:278: kubectl --context kubenet-20220221084933-6550 describe pod : exit status 1
helpers_test.go:176: Cleaning up "kubenet-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p kubenet-20220221084933-6550
E0221 09:18:42.084259 6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubenet-20220221084933-6550: (2.685573792s)
--- FAIL: TestNetworkPlugins/group/kubenet (678.23s)

=== FAIL: . TestNetworkPlugins/group (0.24s)
--- FAIL: TestNetworkPlugins/group (0.24s)

=== FAIL: . TestNetworkPlugins (1882.40s)

DONE 345 tests, 20 skipped, 21 failures in 4119.188s