=== RUN TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0: exit status 80 (9m39.788985919s)
-- stdout --
* [old-k8s-version-330869] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17363
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node old-k8s-version-330869 in cluster old-k8s-version-330869
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2200MB) ...
* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I1005 20:36:47.919687 848852 out.go:296] Setting OutFile to fd 1 ...
I1005 20:36:47.919964 848852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:36:47.919973 848852 out.go:309] Setting ErrFile to fd 2...
I1005 20:36:47.919978 848852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:36:47.920173 848852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:36:47.920814 848852 out.go:303] Setting JSON to false
I1005 20:36:47.923556 848852 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8356,"bootTime":1696529852,"procs":904,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1005 20:36:47.923650 848852 start.go:138] virtualization: kvm guest
I1005 20:36:47.925906 848852 out.go:177] * [old-k8s-version-330869] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I1005 20:36:47.927990 848852 out.go:177] - MINIKUBE_LOCATION=17363
I1005 20:36:47.929419 848852 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1005 20:36:47.928012 848852 notify.go:220] Checking for updates...
I1005 20:36:47.932014 848852 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
I1005 20:36:47.933550 848852 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
I1005 20:36:47.934951 848852 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1005 20:36:47.936287 848852 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1005 20:36:47.938025 848852 config.go:182] Loaded profile config "bridge-264029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:36:47.938137 848852 config.go:182] Loaded profile config "flannel-264029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:36:47.938237 848852 config.go:182] Loaded profile config "kubenet-264029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:36:47.938345 848852 driver.go:378] Setting default libvirt URI to qemu:///system
I1005 20:36:47.964665 848852 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
I1005 20:36:47.964798 848852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1005 20:36:48.025439 848852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-05 20:36:48.015397239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1005 20:36:48.025552 848852 docker.go:294] overlay module found
I1005 20:36:48.027633 848852 out.go:177] * Using the docker driver based on user configuration
I1005 20:36:48.028989 848852 start.go:298] selected driver: docker
I1005 20:36:48.029005 848852 start.go:902] validating driver "docker" against <nil>
I1005 20:36:48.029019 848852 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1005 20:36:48.030024 848852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1005 20:36:48.085626 848852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-05 20:36:48.075813573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1005 20:36:48.085813 848852 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I1005 20:36:48.086021 848852 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1005 20:36:48.088133 848852 out.go:177] * Using Docker driver with root privileges
I1005 20:36:48.089876 848852 cni.go:84] Creating CNI manager for ""
I1005 20:36:48.089915 848852 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1005 20:36:48.089929 848852 start_flags.go:321] config:
{Name:old-k8s-version-330869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1005 20:36:48.093801 848852 out.go:177] * Starting control plane node old-k8s-version-330869 in cluster old-k8s-version-330869
I1005 20:36:48.095264 848852 cache.go:122] Beginning downloading kic base image for docker with docker
I1005 20:36:48.096753 848852 out.go:177] * Pulling base image ...
I1005 20:36:48.098129 848852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I1005 20:36:48.098186 848852 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
I1005 20:36:48.098212 848852 cache.go:57] Caching tarball of preloaded images
I1005 20:36:48.098238 848852 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
I1005 20:36:48.098335 848852 preload.go:174] Found /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1005 20:36:48.098350 848852 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
I1005 20:36:48.098477 848852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/config.json ...
I1005 20:36:48.098504 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/config.json: {Name:mk99752faf0bffc70eb01d982f9c37d9a054b90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:36:48.116009 848852 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
I1005 20:36:48.116035 848852 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
I1005 20:36:48.116061 848852 cache.go:195] Successfully downloaded all kic artifacts
I1005 20:36:48.116099 848852 start.go:365] acquiring machines lock for old-k8s-version-330869: {Name:mk380d306e21968d92a9ebd5eb2e08ba9e79c051 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1005 20:36:48.116229 848852 start.go:369] acquired machines lock for "old-k8s-version-330869" in 94.435µs
I1005 20:36:48.116272 848852 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-330869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1005 20:36:48.116379 848852 start.go:125] createHost starting for "" (driver="docker")
I1005 20:36:48.118693 848852 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I1005 20:36:48.118976 848852 start.go:159] libmachine.API.Create for "old-k8s-version-330869" (driver="docker")
I1005 20:36:48.119020 848852 client.go:168] LocalClient.Create starting
I1005 20:36:48.119112 848852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem
I1005 20:36:48.119146 848852 main.go:141] libmachine: Decoding PEM data...
I1005 20:36:48.119164 848852 main.go:141] libmachine: Parsing certificate...
I1005 20:36:48.119213 848852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem
I1005 20:36:48.119241 848852 main.go:141] libmachine: Decoding PEM data...
I1005 20:36:48.119252 848852 main.go:141] libmachine: Parsing certificate...
I1005 20:36:48.120099 848852 cli_runner.go:164] Run: docker network inspect old-k8s-version-330869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1005 20:36:48.137596 848852 cli_runner.go:211] docker network inspect old-k8s-version-330869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1005 20:36:48.137694 848852 network_create.go:281] running [docker network inspect old-k8s-version-330869] to gather additional debugging logs...
I1005 20:36:48.137717 848852 cli_runner.go:164] Run: docker network inspect old-k8s-version-330869
W1005 20:36:48.154952 848852 cli_runner.go:211] docker network inspect old-k8s-version-330869 returned with exit code 1
I1005 20:36:48.154989 848852 network_create.go:284] error running [docker network inspect old-k8s-version-330869]: docker network inspect old-k8s-version-330869: exit status 1
stdout:
[]
stderr:
Error response from daemon: network old-k8s-version-330869 not found
I1005 20:36:48.155024 848852 network_create.go:286] output of [docker network inspect old-k8s-version-330869]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network old-k8s-version-330869 not found
** /stderr **
I1005 20:36:48.155170 848852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1005 20:36:48.173321 848852 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d49f16ce6477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:89:e2:2f:34} reservation:<nil>}
I1005 20:36:48.174214 848852 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cd43b43b5fb6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:70:71:c4:9f} reservation:<nil>}
I1005 20:36:48.174897 848852 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ec7c14bb7816 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:05:92:77:e8} reservation:<nil>}
I1005 20:36:48.175632 848852 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c93df026c753 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:48:da:f3:d2} reservation:<nil>}
I1005 20:36:48.176528 848852 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f16cc0}
I1005 20:36:48.176554 848852 network_create.go:124] attempt to create docker network old-k8s-version-330869 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1005 20:36:48.176618 848852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-330869 old-k8s-version-330869
I1005 20:36:48.234912 848852 network_create.go:108] docker network old-k8s-version-330869 192.168.85.0/24 created
I1005 20:36:48.234958 848852 kic.go:117] calculated static IP "192.168.85.2" for the "old-k8s-version-330869" container
I1005 20:36:48.235048 848852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1005 20:36:48.252099 848852 cli_runner.go:164] Run: docker volume create old-k8s-version-330869 --label name.minikube.sigs.k8s.io=old-k8s-version-330869 --label created_by.minikube.sigs.k8s.io=true
I1005 20:36:48.270352 848852 oci.go:103] Successfully created a docker volume old-k8s-version-330869
I1005 20:36:48.270463 848852 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-330869-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-330869 --entrypoint /usr/bin/test -v old-k8s-version-330869:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
I1005 20:36:48.791875 848852 oci.go:107] Successfully prepared a docker volume old-k8s-version-330869
I1005 20:36:48.791930 848852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I1005 20:36:48.791970 848852 kic.go:190] Starting extracting preloaded images to volume ...
I1005 20:36:48.792070 848852 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-330869:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
I1005 20:36:53.585436 848852 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-330869:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.793290555s)
I1005 20:36:53.585466 848852 kic.go:199] duration metric: took 4.793497 seconds to extract preloaded images to volume
W1005 20:36:53.585577 848852 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1005 20:36:53.585705 848852 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1005 20:36:53.688434 848852 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-330869 --name old-k8s-version-330869 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-330869 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-330869 --network old-k8s-version-330869 --ip 192.168.85.2 --volume old-k8s-version-330869:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
I1005 20:36:54.092328 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Running}}
I1005 20:36:54.115472 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
I1005 20:36:54.146432 848852 cli_runner.go:164] Run: docker exec old-k8s-version-330869 stat /var/lib/dpkg/alternatives/iptables
I1005 20:36:54.222564 848852 oci.go:144] the created container "old-k8s-version-330869" has a running status.
I1005 20:36:54.222600 848852 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa...
I1005 20:36:54.393461 848852 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1005 20:36:54.417158 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
I1005 20:36:54.450025 848852 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1005 20:36:54.450055 848852 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-330869 chown docker:docker /home/docker/.ssh/authorized_keys]
I1005 20:36:54.538711 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
I1005 20:36:54.562318 848852 machine.go:88] provisioning docker machine ...
I1005 20:36:54.562370 848852 ubuntu.go:169] provisioning hostname "old-k8s-version-330869"
I1005 20:36:54.562434 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:54.595826 848852 main.go:141] libmachine: Using SSH client type: native
I1005 20:36:54.596278 848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33383 <nil> <nil>}
I1005 20:36:54.596304 848852 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-330869 && echo "old-k8s-version-330869" | sudo tee /etc/hostname
I1005 20:36:54.597029 848852 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41130->127.0.0.1:33383: read: connection reset by peer
I1005 20:36:57.757089 848852 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330869
I1005 20:36:57.757178 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:57.777649 848852 main.go:141] libmachine: Using SSH client type: native
I1005 20:36:57.777978 848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33383 <nil> <nil>}
I1005 20:36:57.778004 848852 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-330869' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330869/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-330869' | sudo tee -a /etc/hosts;
fi
fi
I1005 20:36:57.913764 848852 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1005 20:36:57.913805 848852 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-491115/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-491115/.minikube}
I1005 20:36:57.913838 848852 ubuntu.go:177] setting up certificates
I1005 20:36:57.913858 848852 provision.go:83] configureAuth start
I1005 20:36:57.913935 848852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330869
I1005 20:36:57.934828 848852 provision.go:138] copyHostCerts
I1005 20:36:57.934885 848852 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem, removing ...
I1005 20:36:57.934893 848852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem
I1005 20:36:57.934972 848852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem (1123 bytes)
I1005 20:36:57.935086 848852 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem, removing ...
I1005 20:36:57.935101 848852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem
I1005 20:36:57.935139 848852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem (1679 bytes)
I1005 20:36:57.935276 848852 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem, removing ...
I1005 20:36:57.935288 848852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem
I1005 20:36:57.935324 848852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem (1082 bytes)
I1005 20:36:57.935419 848852 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330869 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-330869]
I1005 20:36:58.024520 848852 provision.go:172] copyRemoteCerts
I1005 20:36:58.024583 848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1005 20:36:58.024645 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:58.041908 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:36:58.138407 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1005 20:36:58.163830 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1005 20:36:58.188988 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1005 20:36:58.214350 848852 provision.go:86] duration metric: configureAuth took 300.469026ms
I1005 20:36:58.214380 848852 ubuntu.go:193] setting minikube options for container-runtime
I1005 20:36:58.214555 848852 config.go:182] Loaded profile config "old-k8s-version-330869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I1005 20:36:58.214618 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:58.233958 848852 main.go:141] libmachine: Using SSH client type: native
I1005 20:36:58.234450 848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33383 <nil> <nil>}
I1005 20:36:58.234478 848852 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1005 20:36:58.373924 848852 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1005 20:36:58.373955 848852 ubuntu.go:71] root file system type: overlay
I1005 20:36:58.374081 848852 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1005 20:36:58.374161 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:58.391540 848852 main.go:141] libmachine: Using SSH client type: native
I1005 20:36:58.391915 848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33383 <nil> <nil>}
I1005 20:36:58.392004 848852 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1005 20:36:58.542978 848852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1005 20:36:58.543087 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:58.561956 848852 main.go:141] libmachine: Using SSH client type: native
I1005 20:36:58.562286 848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33383 <nil> <nil>}
I1005 20:36:58.562306 848852 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1005 20:36:59.329516 848852 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-09-04 12:30:15.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-10-05 20:36:58.539594498 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1005 20:36:59.329548 848852 machine.go:91] provisioned docker machine in 4.767201085s
I1005 20:36:59.329560 848852 client.go:171] LocalClient.Create took 11.210532402s
I1005 20:36:59.329576 848852 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-330869" took 11.210602333s
I1005 20:36:59.329584 848852 start.go:300] post-start starting for "old-k8s-version-330869" (driver="docker")
I1005 20:36:59.329604 848852 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1005 20:36:59.329676 848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1005 20:36:59.329723 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:59.349346 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:36:59.447055 848852 ssh_runner.go:195] Run: cat /etc/os-release
I1005 20:36:59.450698 848852 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1005 20:36:59.450735 848852 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1005 20:36:59.450744 848852 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1005 20:36:59.450751 848852 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I1005 20:36:59.450768 848852 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/addons for local assets ...
I1005 20:36:59.450869 848852 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/files for local assets ...
I1005 20:36:59.450940 848852 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem -> 4979262.pem in /etc/ssl/certs
I1005 20:36:59.451025 848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1005 20:36:59.460458 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /etc/ssl/certs/4979262.pem (1708 bytes)
I1005 20:36:59.485789 848852 start.go:303] post-start completed in 156.188398ms
I1005 20:36:59.486213 848852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330869
I1005 20:36:59.505013 848852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/config.json ...
I1005 20:36:59.505378 848852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1005 20:36:59.505437 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:59.524048 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:36:59.618225 848852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1005 20:36:59.622843 848852 start.go:128] duration metric: createHost completed in 11.506445418s
I1005 20:36:59.622871 848852 start.go:83] releasing machines lock for "old-k8s-version-330869", held for 11.506615462s
I1005 20:36:59.622945 848852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330869
I1005 20:36:59.642431 848852 ssh_runner.go:195] Run: cat /version.json
I1005 20:36:59.642495 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:59.642432 848852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1005 20:36:59.642605 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:36:59.662581 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:36:59.662737 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:36:59.853461 848852 ssh_runner.go:195] Run: systemctl --version
I1005 20:36:59.858207 848852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1005 20:36:59.862990 848852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1005 20:36:59.889483 848852 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1005 20:36:59.889588 848852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I1005 20:36:59.906500 848852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I1005 20:36:59.923250 848852 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1005 20:36:59.923285 848852 start.go:469] detecting cgroup driver to use...
I1005 20:36:59.923323 848852 detect.go:196] detected "cgroupfs" cgroup driver on host os
I1005 20:36:59.923474 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1005 20:36:59.939990 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
I1005 20:36:59.950912 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1005 20:36:59.961778 848852 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1005 20:36:59.961837 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1005 20:36:59.973102 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1005 20:36:59.983067 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1005 20:36:59.993850 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1005 20:37:00.005096 848852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1005 20:37:00.014781 848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1005 20:37:00.025703 848852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1005 20:37:00.034265 848852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1005 20:37:00.044006 848852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:37:00.133786 848852 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1005 20:37:00.239384 848852 start.go:469] detecting cgroup driver to use...
I1005 20:37:00.239442 848852 detect.go:196] detected "cgroupfs" cgroup driver on host os
I1005 20:37:00.239500 848852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1005 20:37:00.252124 848852 cruntime.go:277] skipping containerd shutdown because we are bound to it
I1005 20:37:00.252191 848852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1005 20:37:00.266062 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1005 20:37:00.286402 848852 ssh_runner.go:195] Run: which cri-dockerd
I1005 20:37:00.292344 848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1005 20:37:00.320044 848852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1005 20:37:00.339050 848852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1005 20:37:00.446290 848852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1005 20:37:00.547903 848852 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
I1005 20:37:00.548059 848852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1005 20:37:00.567151 848852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:37:00.649892 848852 ssh_runner.go:195] Run: sudo systemctl restart docker
I1005 20:37:00.910258 848852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1005 20:37:00.936286 848852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1005 20:37:00.969093 848852 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
I1005 20:37:00.969197 848852 cli_runner.go:164] Run: docker network inspect old-k8s-version-330869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1005 20:37:00.987767 848852 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1005 20:37:00.991940 848852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1005 20:37:01.003986 848852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I1005 20:37:01.004064 848852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1005 20:37:01.024559 848852 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
k8s.gcr.io/pause:3.1
-- /stdout --
I1005 20:37:01.024582 848852 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
I1005 20:37:01.024625 848852 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1005 20:37:01.034897 848852 ssh_runner.go:195] Run: which lz4
I1005 20:37:01.039150 848852 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1005 20:37:01.042853 848852 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1005 20:37:01.042881 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
I1005 20:37:01.954367 848852 docker.go:628] Took 0.915253 seconds to copy over tarball
I1005 20:37:01.954442 848852 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1005 20:37:04.201575 848852 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.247097174s)
I1005 20:37:04.201615 848852 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1005 20:37:04.268945 848852 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1005 20:37:04.277399 848852 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
I1005 20:37:04.295027 848852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:37:04.374758 848852 ssh_runner.go:195] Run: sudo systemctl restart docker
I1005 20:37:07.041735 848852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.666927518s)
I1005 20:37:07.041868 848852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1005 20:37:07.062423 848852 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
k8s.gcr.io/pause:3.1
-- /stdout --
I1005 20:37:07.062449 848852 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
I1005 20:37:07.062459 848852 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
I1005 20:37:07.063895 848852 image.go:134] retrieving image: registry.k8s.io/pause:3.1
I1005 20:37:07.067879 848852 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
I1005 20:37:07.067905 848852 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
I1005 20:37:07.067879 848852 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
I1005 20:37:07.067879 848852 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
I1005 20:37:07.067880 848852 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
I1005 20:37:07.067884 848852 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1005 20:37:07.067880 848852 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
I1005 20:37:07.068476 848852 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
I1005 20:37:07.068802 848852 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
I1005 20:37:07.068815 848852 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
I1005 20:37:07.068891 848852 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
I1005 20:37:07.068903 848852 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
I1005 20:37:07.068955 848852 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
I1005 20:37:07.068977 848852 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
I1005 20:37:07.068907 848852 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1005 20:37:07.235266 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
I1005 20:37:07.240360 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
I1005 20:37:07.249270 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
I1005 20:37:07.257629 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
I1005 20:37:07.257889 848852 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
I1005 20:37:07.257939 848852 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
I1005 20:37:07.257981 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
I1005 20:37:07.260526 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
I1005 20:37:07.262168 848852 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
I1005 20:37:07.262219 848852 docker.go:317] Removing image: registry.k8s.io/etcd:3.3.15-0
I1005 20:37:07.262264 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
I1005 20:37:07.273697 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
I1005 20:37:07.275330 848852 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
I1005 20:37:07.275394 848852 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
I1005 20:37:07.275448 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
I1005 20:37:07.277427 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
I1005 20:37:07.319579 848852 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
I1005 20:37:07.319640 848852 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.16.0
I1005 20:37:07.319701 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
I1005 20:37:07.319825 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
I1005 20:37:07.322936 848852 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
I1005 20:37:07.322996 848852 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
I1005 20:37:07.323038 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
I1005 20:37:07.323053 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
I1005 20:37:07.331388 848852 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
I1005 20:37:07.331467 848852 docker.go:317] Removing image: registry.k8s.io/pause:3.1
I1005 20:37:07.331515 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
I1005 20:37:07.340975 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
I1005 20:37:07.346081 848852 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
I1005 20:37:07.346161 848852 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.2
I1005 20:37:07.346261 848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
I1005 20:37:07.347802 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
I1005 20:37:07.353891 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
I1005 20:37:07.355556 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
I1005 20:37:07.367929 848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
I1005 20:37:07.382091 848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I1005 20:37:07.429006 848852 cache_images.go:92] LoadImages completed in 366.527099ms
W1005 20:37:07.429128 848852 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
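The "needs transfer" and "Unable to load cached images" churn above stems from a registry rename: the preload ships images tagged k8s.gcr.io/..., while this build looks them up as registry.k8s.io/..., so every inspect by the new name misses, and the fallback files under .minikube/cache/images/amd64 don't exist either. The warning is non-fatal; kubeadm's preflight pull covers the gap. A sketch of the presence probe, using the same docker image inspect --format {{.Id}} call:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the image ID for tag, or false if the runtime lacks it
// (docker image inspect exits non-zero for unknown tags).
func imageID(tag string) (string, bool) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	for _, tag := range []string{
		"registry.k8s.io/kube-apiserver:v1.16.0", // preloaded only under k8s.gcr.io, so this misses
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		if id, ok := imageID(tag); ok {
			fmt.Println(tag, "=>", id)
		} else {
			fmt.Println(tag, "needs transfer")
		}
	}
}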
I1005 20:37:07.429252 848852 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1005 20:37:07.489573 848852 cni.go:84] Creating CNI manager for ""
I1005 20:37:07.489598 848852 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1005 20:37:07.489615 848852 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1005 20:37:07.489636 848852 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330869 NodeName:old-k8s-version-330869 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1005 20:37:07.489779 848852 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-330869"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: old-k8s-version-330869
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
kubernetesVersion: v1.16.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
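minikube renders the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) before scp'ing the result to /var/tmp/minikube/kubeadm.yaml.new. Purely as an illustration of that render step — this is not minikube's actual template — a text/template sketch producing the head of the file:

package main

import (
	"os"
	"text/template"
)

// A toy template covering just the InitConfiguration fragment above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.85.2",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/dockershim.sock",
		"NodeName":         "old-k8s-version-330869",
	})
}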
I1005 20:37:07.489853 848852 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-330869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1005 20:37:07.489899 848852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
I1005 20:37:07.498525 848852 binaries.go:44] Found k8s binaries, skipping transfer
I1005 20:37:07.498593 848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1005 20:37:07.507319 848852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
I1005 20:37:07.526216 848852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1005 20:37:07.545140 848852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
I1005 20:37:07.564649 848852 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1005 20:37:07.568218 848852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1005 20:37:07.579493 848852 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869 for IP: 192.168.85.2
I1005 20:37:07.579548 848852 certs.go:190] acquiring lock for shared ca certs: {Name:mka6627fa5c31076c5fa233a6bbda946476bff5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:07.579720 848852 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key
I1005 20:37:07.579771 848852 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key
I1005 20:37:07.579831 848852 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.key
I1005 20:37:07.579853 848852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.crt with IP's: []
I1005 20:37:07.958797 848852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.crt ...
I1005 20:37:07.958830 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.crt: {Name:mk4ed8648d0b7843797ac83f4b98a7e432949205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:07.958989 848852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.key ...
I1005 20:37:07.959003 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.key: {Name:mkdeeda61d9b948461727c2c9411c560d4602d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:07.959098 848852 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c
I1005 20:37:07.959122 848852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1005 20:37:08.028205 848852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c ...
I1005 20:37:08.028238 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c: {Name:mk190cd886cb88264c237696eef655abb98bca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:08.028438 848852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c ...
I1005 20:37:08.028458 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c: {Name:mk0015e0633a7ac62eded2fa85365447422119b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:08.028553 848852 certs.go:337] copying /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt
I1005 20:37:08.028647 848852 certs.go:341] copying /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key
I1005 20:37:08.028722 848852 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key
I1005 20:37:08.028743 848852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt with IP's: []
I1005 20:37:08.294258 848852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt ...
I1005 20:37:08.294297 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt: {Name:mkf2ca41570659a17afc4d42fd4914df945ce32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:08.294486 848852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key ...
I1005 20:37:08.294503 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key: {Name:mkc674e75c0a5d5cc7a649ee713bd5202b448a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
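The crypto.go lines above are plain x509 work: mint a key pair, then a certificate whose IP SANs cover the node and service addresses [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]. A self-contained stdlib sketch (self-signed for brevity, whereas the real apiserver cert is signed by minikubeCA; the lifetime matches the CertExpiration:26280h0m0s seen later in the cluster config):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		IPAddresses: []net.IP{
			net.ParseIP("192.168.85.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: the template doubles as its own parent.
	der, _ := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}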
I1005 20:37:08.294733 848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem (1338 bytes)
W1005 20:37:08.294776 848852 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926_empty.pem, impossibly tiny 0 bytes
I1005 20:37:08.294788 848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem (1671 bytes)
I1005 20:37:08.294825 848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem (1082 bytes)
I1005 20:37:08.294854 848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem (1123 bytes)
I1005 20:37:08.294881 848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem (1679 bytes)
I1005 20:37:08.294952 848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem (1708 bytes)
I1005 20:37:08.295568 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1005 20:37:08.330946 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1005 20:37:08.379904 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1005 20:37:08.408268 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1005 20:37:08.433341 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1005 20:37:08.460942 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1005 20:37:08.488803 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1005 20:37:08.570596 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1005 20:37:08.596930 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem --> /usr/share/ca-certificates/497926.pem (1338 bytes)
I1005 20:37:08.621626 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /usr/share/ca-certificates/4979262.pem (1708 bytes)
I1005 20:37:08.712387 848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1005 20:37:08.738473 848852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1005 20:37:08.757930 848852 ssh_runner.go:195] Run: openssl version
I1005 20:37:08.764376 848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1005 20:37:08.775051 848852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1005 20:37:08.778692 848852 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 5 20:03 /usr/share/ca-certificates/minikubeCA.pem
I1005 20:37:08.778758 848852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1005 20:37:08.785976 848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1005 20:37:08.796153 848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497926.pem && ln -fs /usr/share/ca-certificates/497926.pem /etc/ssl/certs/497926.pem"
I1005 20:37:08.805780 848852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497926.pem
I1005 20:37:08.809761 848852 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 5 20:07 /usr/share/ca-certificates/497926.pem
I1005 20:37:08.809820 848852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497926.pem
I1005 20:37:08.817484 848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497926.pem /etc/ssl/certs/51391683.0"
I1005 20:37:08.827242 848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4979262.pem && ln -fs /usr/share/ca-certificates/4979262.pem /etc/ssl/certs/4979262.pem"
I1005 20:37:08.839332 848852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4979262.pem
I1005 20:37:08.843366 848852 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 5 20:07 /usr/share/ca-certificates/4979262.pem
I1005 20:37:08.843417 848852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4979262.pem
I1005 20:37:08.851212 848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4979262.pem /etc/ssl/certs/3ec20f2e.0"
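Each openssl x509 -hash / ln -fs pair above implements OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is located via a symlink named <subject-hash>.0, hence b5213941.0, 51391683.0 and 3ec20f2e.0. A sketch of one such pair:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/etc/ssl/certs/minikubeCA.pem"
	// Same probe as the log: print the certificate's subject hash, nothing else.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // the -f in ln -fs
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}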
I1005 20:37:08.862753 848852 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1005 20:37:08.866859 848852 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I1005 20:37:08.866926 848852 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1005 20:37:08.867049 848852 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1005 20:37:08.887607 848852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1005 20:37:08.897145 848852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1005 20:37:08.908022 848852 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I1005 20:37:08.908088 848852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1005 20:37:08.919431 848852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1005 20:37:08.919493 848852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1005 20:37:08.992659 848852 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
I1005 20:37:08.992761 848852 kubeadm.go:322] [preflight] Running pre-flight checks
I1005 20:37:09.242033 848852 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I1005 20:37:09.242121 848852 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-gcp
I1005 20:37:09.242191 848852 kubeadm.go:322] DOCKER_VERSION: 24.0.6
I1005 20:37:09.242244 848852 kubeadm.go:322] OS: Linux
I1005 20:37:09.242309 848852 kubeadm.go:322] CGROUPS_CPU: enabled
I1005 20:37:09.242377 848852 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I1005 20:37:09.242445 848852 kubeadm.go:322] CGROUPS_CPUSET: enabled
I1005 20:37:09.242515 848852 kubeadm.go:322] CGROUPS_DEVICES: enabled
I1005 20:37:09.242594 848852 kubeadm.go:322] CGROUPS_FREEZER: enabled
I1005 20:37:09.242659 848852 kubeadm.go:322] CGROUPS_MEMORY: enabled
I1005 20:37:09.382245 848852 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I1005 20:37:09.382418 848852 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1005 20:37:09.382547 848852 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1005 20:37:09.679782 848852 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1005 20:37:09.681184 848852 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1005 20:37:09.688454 848852 kubeadm.go:322] [kubelet-start] Activating the kubelet service
I1005 20:37:09.801004 848852 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1005 20:37:09.804219 848852 out.go:204] - Generating certificates and keys ...
I1005 20:37:09.804349 848852 kubeadm.go:322] [certs] Using existing ca certificate authority
I1005 20:37:09.804438 848852 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I1005 20:37:10.263893 848852 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I1005 20:37:10.663748 848852 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I1005 20:37:11.011679 848852 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I1005 20:37:11.134670 848852 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I1005 20:37:11.266037 848852 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I1005 20:37:11.266251 848852 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-330869 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1005 20:37:11.337048 848852 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I1005 20:37:11.337384 848852 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-330869 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1005 20:37:11.474296 848852 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I1005 20:37:11.546459 848852 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I1005 20:37:11.687001 848852 kubeadm.go:322] [certs] Generating "sa" key and public key
I1005 20:37:11.687129 848852 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1005 20:37:11.798486 848852 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I1005 20:37:11.870159 848852 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1005 20:37:11.982702 848852 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1005 20:37:12.166063 848852 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1005 20:37:12.167376 848852 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1005 20:37:12.170016 848852 out.go:204] - Booting up control plane ...
I1005 20:37:12.170180 848852 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1005 20:37:12.176340 848852 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1005 20:37:12.217622 848852 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1005 20:37:12.219576 848852 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1005 20:37:12.223059 848852 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1005 20:37:22.225884 848852 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.002739 seconds
I1005 20:37:22.226037 848852 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1005 20:37:22.239720 848852 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I1005 20:37:22.759159 848852 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I1005 20:37:22.759416 848852 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330869 as control-plane by adding the label "node-role.kubernetes.io/master=''"
I1005 20:37:23.266801 848852 kubeadm.go:322] [bootstrap-token] Using token: tirqp6.puzpp2xudnf7zigi
I1005 20:37:23.268588 848852 out.go:204] - Configuring RBAC rules ...
I1005 20:37:23.268746 848852 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1005 20:37:23.272113 848852 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1005 20:37:23.276172 848852 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1005 20:37:23.278638 848852 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1005 20:37:23.281472 848852 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1005 20:37:23.337015 848852 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I1005 20:37:23.681202 848852 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I1005 20:37:23.682945 848852 kubeadm.go:322]
I1005 20:37:23.683077 848852 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I1005 20:37:23.683110 848852 kubeadm.go:322]
I1005 20:37:23.683217 848852 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I1005 20:37:23.683230 848852 kubeadm.go:322]
I1005 20:37:23.683261 848852 kubeadm.go:322] mkdir -p $HOME/.kube
I1005 20:37:23.683343 848852 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1005 20:37:23.683414 848852 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1005 20:37:23.683424 848852 kubeadm.go:322]
I1005 20:37:23.683489 848852 kubeadm.go:322] You should now deploy a pod network to the cluster.
I1005 20:37:23.683587 848852 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1005 20:37:23.683685 848852 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1005 20:37:23.683697 848852 kubeadm.go:322]
I1005 20:37:23.683810 848852 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I1005 20:37:23.683907 848852 kubeadm.go:322] and service account keys on each node and then running the following as root:
I1005 20:37:23.683918 848852 kubeadm.go:322]
I1005 20:37:23.684023 848852 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tirqp6.puzpp2xudnf7zigi \
I1005 20:37:23.684158 848852 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:1a3efe62433952af74d7dd241658b1c6e6ef634460498e5c06f52126617f7626 \
I1005 20:37:23.684198 848852 kubeadm.go:322] --control-plane
I1005 20:37:23.684207 848852 kubeadm.go:322]
I1005 20:37:23.684311 848852 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I1005 20:37:23.684322 848852 kubeadm.go:322]
I1005 20:37:23.684421 848852 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tirqp6.puzpp2xudnf7zigi \
I1005 20:37:23.684557 848852 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:1a3efe62433952af74d7dd241658b1c6e6ef634460498e5c06f52126617f7626
I1005 20:37:23.687172 848852 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1005 20:37:23.687372 848852 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
I1005 20:37:23.687626 848852 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
I1005 20:37:23.687719 848852 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1005 20:37:23.687775 848852 cni.go:84] Creating CNI manager for ""
I1005 20:37:23.687807 848852 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1005 20:37:23.687850 848852 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1005 20:37:23.687919 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:23.687921 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=old-k8s-version-330869 minikube.k8s.io/updated_at=2023_10_05T20_37_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:24.028458 848852 ops.go:34] apiserver oom_adj: -16
I1005 20:37:24.028564 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:24.123626 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:24.700769 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:25.200541 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:25.700504 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:26.200936 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:26.700944 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:27.200381 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:27.700490 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:28.200469 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:28.700957 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:29.200664 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:29.700412 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:30.201008 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:30.700269 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:31.201263 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:31.701305 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:32.200694 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:32.700372 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:33.201358 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:33.700533 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:34.201302 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:34.700298 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:35.201106 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:35.701378 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:36.200933 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:36.700418 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:37.200435 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:37.700370 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:38.200992 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:38.700590 848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1005 20:37:38.851796 848852 kubeadm.go:1081] duration metric: took 15.163932557s to wait for elevateKubeSystemPrivileges.
I1005 20:37:38.851833 848852 kubeadm.go:406] StartCluster complete in 29.984923035s
I1005 20:37:38.851852 848852 settings.go:142] acquiring lock: {Name:mk74c5e95d8c9fcaf06097e6d304129504752ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:38.851923 848852 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17363-491115/kubeconfig
I1005 20:37:38.853026 848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:37:38.900545 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1005 20:37:38.900672 848852 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I1005 20:37:38.900748 848852 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330869"
I1005 20:37:38.900763 848852 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330869"
I1005 20:37:38.900800 848852 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330869"
I1005 20:37:38.900831 848852 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330869"
I1005 20:37:38.900871 848852 host.go:66] Checking if "old-k8s-version-330869" exists ...
I1005 20:37:38.900834 848852 config.go:182] Loaded profile config "old-k8s-version-330869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I1005 20:37:38.901374 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
I1005 20:37:38.901811 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
I1005 20:37:38.928479 848852 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330869"
I1005 20:37:38.978125 848852 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1005 20:37:38.994806 848852 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1005 20:37:38.994831 848852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1005 20:37:38.994905 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:37:38.978173 848852 host.go:66] Checking if "old-k8s-version-330869" exists ...
I1005 20:37:38.940356 848852 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330869" context rescaled to 1 replicas
I1005 20:37:38.995184 848852 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1005 20:37:38.997785 848852 out.go:177] * Verifying Kubernetes components...
I1005 20:37:38.995670 848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
I1005 20:37:39.002134 848852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1005 20:37:39.033883 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1005 20:37:39.035485 848852 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330869" to be "Ready" ...
I1005 20:37:39.038127 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:37:39.042155 848852 node_ready.go:49] node "old-k8s-version-330869" has status "Ready":"True"
I1005 20:37:39.042182 848852 node_ready.go:38] duration metric: took 6.650275ms waiting for node "old-k8s-version-330869" to be "Ready" ...
I1005 20:37:39.042195 848852 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1005 20:37:39.050325 848852 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace to be "Ready" ...
I1005 20:37:39.063872 848852 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I1005 20:37:39.063896 848852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1005 20:37:39.063957 848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
I1005 20:37:39.093129 848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
I1005 20:37:39.249557 848852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1005 20:37:39.355969 848852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1005 20:37:39.825436 848852 start.go:923] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
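The kubectl replace pipeline at 20:37:39.033 rewrites the Corefile stored in the coredns ConfigMap so in-cluster workloads can resolve the host machine. Reconstructed from its two sed expressions, the edit adds a log directive after errors and, ahead of the forward . /etc/resolv.conf line, this stanza:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }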
I1005 20:37:40.330580 848852 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I1005 20:37:40.332220 848852 addons.go:502] enable addons completed in 1.431540104s: enabled=[storage-provisioner default-storageclass]
I1005 20:37:41.094031 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:43.593508 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:45.594037 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:48.093420 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:50.093978 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:52.216164 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:54.592609 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:56.592855 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:37:58.593029 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:00.593273 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:02.593404 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:04.593563 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:07.094365 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:09.593999 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:12.093492 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:14.093641 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:16.094034 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:18.592980 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:21.094297 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:23.095657 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:25.593120 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:27.593680 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:30.093875 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:32.593912 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:35.094614 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:37.592828 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:39.593194 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:41.593355 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:44.093479 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:46.093787 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:48.592664 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:50.593150 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:52.593180 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:55.093045 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:57.592904 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:38:59.594150 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:02.094172 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:04.593193 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:06.593519 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:09.093086 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:11.093745 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:13.593125 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:16.092776 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:18.093398 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:20.093745 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:22.096037 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:24.593732 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:26.594148 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:29.092714 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:31.093472 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:33.593280 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:35.593425 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:37.593563 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:39.593680 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:42.093105 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:44.592896 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:46.593130 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:48.593726 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:51.093871 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:53.595038 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:56.092671 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:39:58.093532 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:00.093699 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:02.593675 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:05.092805 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:07.093455 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:09.093550 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:11.592886 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:13.593370 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:15.593878 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:17.594124 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:20.093466 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:22.592921 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:25.093952 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:27.593123 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:29.593520 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:31.593667 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:34.093053 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:36.592982 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:38.593683 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:41.093063 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:43.093386 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:45.093786 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:47.593705 848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
I1005 20:40:50.093878 848852 pod_ready.go:97] node "old-k8s-version-330869" hosting pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
I1005 20:40:50.093914 848852 pod_ready.go:81] duration metric: took 3m11.043547943s waiting for pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace to be "Ready" ...
E1005 20:40:50.093927 848852 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-330869" hosting pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
I1005 20:40:50.093939 848852 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace to be "Ready" ...
I1005 20:40:50.095804 848852 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-wmjhd" not found
I1005 20:40:50.095829 848852 pod_ready.go:81] duration metric: took 1.881885ms waiting for pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace to be "Ready" ...
E1005 20:40:50.095838 848852 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-wmjhd" not found
I1005 20:40:50.095844 848852 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n9cwb" in "kube-system" namespace to be "Ready" ...
I1005 20:40:50.099963 848852 pod_ready.go:97] node "old-k8s-version-330869" hosting pod "kube-proxy-n9cwb" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
I1005 20:40:50.099987 848852 pod_ready.go:81] duration metric: took 4.137428ms waiting for pod "kube-proxy-n9cwb" in "kube-system" namespace to be "Ready" ...
E1005 20:40:50.099995 848852 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-330869" hosting pod "kube-proxy-n9cwb" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
I1005 20:40:50.100000 848852 pod_ready.go:38] duration metric: took 3m11.057792461s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
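[Editor's note] The three minutes of pod_ready.go lines above are a readiness poll: fetch the pod, inspect its Ready condition, sleep, repeat until the 6m0s budget is spent. Below is a minimal client-go sketch of that loop, not minikube's actual code; the pod name and the ~2s interval are taken from the log, and the kubeconfig path is an assumption.

    // podready_sketch.go -- a hypothetical stand-in for the poll that
    // pod_ready.go performs above; not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig in the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 2s for up to 6m -- the same bounds the log shows.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-5644d7b6d9-k2f47", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    // Mirrors the `has status "Ready":"False"` lines above.
                    fmt.Printf("pod has status \"Ready\":%q\n", c.Status)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            fmt.Println("pod never became Ready:", err)
        }
    }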
I1005 20:40:50.100021 848852 api_server.go:52] waiting for apiserver process to appear ...
I1005 20:40:50.100077 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1005 20:40:50.119441 848852 logs.go:284] 1 containers: [91420fd2d357]
I1005 20:40:50.119509 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1005 20:40:50.139691 848852 logs.go:284] 1 containers: [530e42b9f6c7]
I1005 20:40:50.139783 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1005 20:40:50.159543 848852 logs.go:284] 1 containers: [9f0be3358486]
I1005 20:40:50.159617 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1005 20:40:50.178626 848852 logs.go:284] 1 containers: [a576da8318f8]
I1005 20:40:50.178709 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1005 20:40:50.199418 848852 logs.go:284] 1 containers: [cef84f5b51c4]
I1005 20:40:50.199509 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1005 20:40:50.218722 848852 logs.go:284] 1 containers: [6c66019a6e01]
I1005 20:40:50.218816 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1005 20:40:50.238070 848852 logs.go:284] 0 containers: []
W1005 20:40:50.238096 848852 logs.go:286] No container was found matching "kindnet"
I1005 20:40:50.238112 848852 logs.go:123] Gathering logs for kube-scheduler [a576da8318f8] ...
I1005 20:40:50.238131 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a576da8318f8"
I1005 20:40:50.269507 848852 logs.go:123] Gathering logs for kube-proxy [cef84f5b51c4] ...
I1005 20:40:50.269545 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef84f5b51c4"
I1005 20:40:50.291677 848852 logs.go:123] Gathering logs for kube-controller-manager [6c66019a6e01] ...
I1005 20:40:50.291716 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66019a6e01"
I1005 20:40:50.326025 848852 logs.go:123] Gathering logs for kubelet ...
I1005 20:40:50.326065 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1005 20:40:50.379350 848852 logs.go:123] Gathering logs for dmesg ...
I1005 20:40:50.379395 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1005 20:40:50.406532 848852 logs.go:123] Gathering logs for kube-apiserver [91420fd2d357] ...
I1005 20:40:50.406577 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91420fd2d357"
I1005 20:40:50.437288 848852 logs.go:123] Gathering logs for Docker ...
I1005 20:40:50.437326 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1005 20:40:50.455844 848852 logs.go:123] Gathering logs for container status ...
I1005 20:40:50.455880 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1005 20:40:50.498500 848852 logs.go:123] Gathering logs for describe nodes ...
I1005 20:40:50.498530 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1005 20:40:50.595652 848852 logs.go:123] Gathering logs for etcd [530e42b9f6c7] ...
I1005 20:40:50.595696 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 530e42b9f6c7"
I1005 20:40:50.620275 848852 logs.go:123] Gathering logs for coredns [9f0be3358486] ...
I1005 20:40:50.620312 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0be3358486"
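[Editor's note] Each "Gathering logs for ..." pair above is the same two-step: resolve a component to its container ID with a docker ps name filter, then tail that container's logs (journalctl is used the same way for the kubelet and Docker units). minikube executes these over SSH inside the node via ssh_runner.go; the sketch below shells out locally instead, purely for illustration.

    // loggather_sketch.go -- a local illustration of the gathering pattern
    // above; minikube runs the same commands inside the node over SSH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            // Same filter the log shows: docker ps -a --filter=name=k8s_<name>
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "lookup failed:", err)
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                // Same tail depth as the log: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
            }
        }
    }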
I1005 20:40:53.146871 848852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1005 20:40:53.159851 848852 api_server.go:72] duration metric: took 3m14.164619226s to wait for apiserver process to appear ...
I1005 20:40:53.159878 848852 api_server.go:88] waiting for apiserver healthz status ...
I1005 20:40:53.159968 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1005 20:40:53.180019 848852 logs.go:284] 1 containers: [91420fd2d357]
I1005 20:40:53.180088 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1005 20:40:53.199435 848852 logs.go:284] 1 containers: [530e42b9f6c7]
I1005 20:40:53.199525 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1005 20:40:53.218828 848852 logs.go:284] 1 containers: [9f0be3358486]
I1005 20:40:53.218918 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1005 20:40:53.238489 848852 logs.go:284] 1 containers: [a576da8318f8]
I1005 20:40:53.238558 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1005 20:40:53.258800 848852 logs.go:284] 1 containers: [cef84f5b51c4]
I1005 20:40:53.258880 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1005 20:40:53.278841 848852 logs.go:284] 1 containers: [6c66019a6e01]
I1005 20:40:53.278928 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1005 20:40:53.298156 848852 logs.go:284] 0 containers: []
W1005 20:40:53.298184 848852 logs.go:286] No container was found matching "kindnet"
I1005 20:40:53.298203 848852 logs.go:123] Gathering logs for kube-apiserver [91420fd2d357] ...
I1005 20:40:53.298228 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91420fd2d357"
I1005 20:40:53.328555 848852 logs.go:123] Gathering logs for kube-scheduler [a576da8318f8] ...
I1005 20:40:53.328591 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a576da8318f8"
I1005 20:40:53.356192 848852 logs.go:123] Gathering logs for kube-controller-manager [6c66019a6e01] ...
I1005 20:40:53.356236 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66019a6e01"
I1005 20:40:53.408101 848852 logs.go:123] Gathering logs for describe nodes ...
I1005 20:40:53.408142 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1005 20:40:53.504189 848852 logs.go:123] Gathering logs for dmesg ...
I1005 20:40:53.504223 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1005 20:40:53.530244 848852 logs.go:123] Gathering logs for etcd [530e42b9f6c7] ...
I1005 20:40:53.530279 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 530e42b9f6c7"
I1005 20:40:53.554335 848852 logs.go:123] Gathering logs for coredns [9f0be3358486] ...
I1005 20:40:53.554369 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0be3358486"
I1005 20:40:53.575206 848852 logs.go:123] Gathering logs for kube-proxy [cef84f5b51c4] ...
I1005 20:40:53.575234 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef84f5b51c4"
I1005 20:40:53.597030 848852 logs.go:123] Gathering logs for Docker ...
I1005 20:40:53.597062 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1005 20:40:53.614535 848852 logs.go:123] Gathering logs for container status ...
I1005 20:40:53.614572 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1005 20:40:53.654139 848852 logs.go:123] Gathering logs for kubelet ...
I1005 20:40:53.654172 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1005 20:40:56.213313 848852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1005 20:40:56.218410 848852 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I1005 20:40:56.219267 848852 api_server.go:141] control plane version: v1.16.0
I1005 20:40:56.219291 848852 api_server.go:131] duration metric: took 3.059406313s to wait for apiserver health ...
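[Editor's note] The healthz wait that just completed reduces to an HTTPS GET against the apiserver that must return 200 "ok". A bare-bones sketch follows; the real client authenticates with the certificates from the kubeconfig, whereas this sketch skips TLS verification only to stay short. The address is the one in the log.

    // healthz_sketch.go -- the shape of the probe api_server.go performs
    // above; InsecureSkipVerify is a sketch-only shortcut, the real check
    // trusts the cluster CA instead.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log shows "returned 200: ok"
    }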
I1005 20:40:56.219299 848852 system_pods.go:43] waiting for kube-system pods to appear ...
I1005 20:40:56.219366 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1005 20:40:56.238449 848852 logs.go:284] 1 containers: [91420fd2d357]
I1005 20:40:56.238527 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1005 20:40:56.258629 848852 logs.go:284] 1 containers: [530e42b9f6c7]
I1005 20:40:56.258720 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1005 20:40:56.278969 848852 logs.go:284] 1 containers: [9f0be3358486]
I1005 20:40:56.279060 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1005 20:40:56.299094 848852 logs.go:284] 1 containers: [a576da8318f8]
I1005 20:40:56.299162 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1005 20:40:56.318944 848852 logs.go:284] 1 containers: [cef84f5b51c4]
I1005 20:40:56.319016 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1005 20:40:56.338209 848852 logs.go:284] 1 containers: [6c66019a6e01]
I1005 20:40:56.338283 848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1005 20:40:56.357954 848852 logs.go:284] 0 containers: []
W1005 20:40:56.357976 848852 logs.go:286] No container was found matching "kindnet"
I1005 20:40:56.358002 848852 logs.go:123] Gathering logs for dmesg ...
I1005 20:40:56.358017 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1005 20:40:56.385551 848852 logs.go:123] Gathering logs for coredns [9f0be3358486] ...
I1005 20:40:56.385589 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0be3358486"
I1005 20:40:56.406776 848852 logs.go:123] Gathering logs for kube-scheduler [a576da8318f8] ...
I1005 20:40:56.406809 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a576da8318f8"
I1005 20:40:56.432902 848852 logs.go:123] Gathering logs for kube-proxy [cef84f5b51c4] ...
I1005 20:40:56.432934 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef84f5b51c4"
I1005 20:40:56.454562 848852 logs.go:123] Gathering logs for Docker ...
I1005 20:40:56.454590 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1005 20:40:56.472931 848852 logs.go:123] Gathering logs for kubelet ...
I1005 20:40:56.472968 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1005 20:40:56.526518 848852 logs.go:123] Gathering logs for describe nodes ...
I1005 20:40:56.526560 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1005 20:40:56.622248 848852 logs.go:123] Gathering logs for kube-apiserver [91420fd2d357] ...
I1005 20:40:56.622279 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91420fd2d357"
I1005 20:40:56.655051 848852 logs.go:123] Gathering logs for etcd [530e42b9f6c7] ...
I1005 20:40:56.655087 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 530e42b9f6c7"
I1005 20:40:56.678429 848852 logs.go:123] Gathering logs for kube-controller-manager [6c66019a6e01] ...
I1005 20:40:56.678461 848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66019a6e01"
I1005 20:40:56.713931 848852 logs.go:123] Gathering logs for container status ...
I1005 20:40:56.713968 848852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1005 20:40:59.261292 848852 system_pods.go:59] 7 kube-system pods found
I1005 20:40:59.261354 848852 system_pods.go:61] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:40:59.261362 848852 system_pods.go:61] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:40:59.261367 848852 system_pods.go:61] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:40:59.261373 848852 system_pods.go:61] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:40:59.261378 848852 system_pods.go:61] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:40:59.261383 848852 system_pods.go:61] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:40:59.261391 848852 system_pods.go:61] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:40:59.261404 848852 system_pods.go:74] duration metric: took 3.042098794s to wait for pod list to return data ...
I1005 20:40:59.261414 848852 default_sa.go:34] waiting for default service account to be created ...
I1005 20:40:59.263601 848852 default_sa.go:45] found service account: "default"
I1005 20:40:59.263627 848852 default_sa.go:55] duration metric: took 2.205092ms for default service account to be created ...
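[Editor's note] The system_pods.go and default_sa.go checks above are two plain API reads: list the kube-system pods and confirm the "default" service account exists. A combined sketch of both, reusing the same hypothetical client-go setup as the readiness sketch earlier; again not minikube's code.

    // systempods_sketch.go -- the two reads shown above, as a sketch.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %q is %s\n", p.Name, p.Status.Phase)
        }

        sa, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if err != nil {
            fmt.Println("default service account not created yet:", err)
            return
        }
        fmt.Printf("found service account: %q\n", sa.Name)
    }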
I1005 20:40:59.263637 848852 system_pods.go:116] waiting for k8s-apps to be running ...
I1005 20:40:59.266968 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:40:59.266993 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:40:59.267003 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:40:59.267008 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:40:59.267013 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:40:59.267018 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:40:59.267023 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:40:59.267032 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:40:59.267056 848852 retry.go:31] will retry after 300.792529ms: missing components: kube-dns, kube-proxy
I1005 20:40:59.572465 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:40:59.572493 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:40:59.572500 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:40:59.572505 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:40:59.572510 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:40:59.572515 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:40:59.572520 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:40:59.572527 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:40:59.572542 848852 retry.go:31] will retry after 328.691351ms: missing components: kube-dns, kube-proxy
I1005 20:40:59.906606 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:40:59.906646 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:40:59.906656 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:40:59.906663 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:40:59.906671 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:40:59.906678 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:40:59.906688 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:40:59.906699 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:40:59.906725 848852 retry.go:31] will retry after 343.915985ms: missing components: kube-dns, kube-proxy
I1005 20:41:00.254958 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:00.254992 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:00.255001 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:00.255008 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:00.255017 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:00.255025 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:00.255033 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:00.255043 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:00.255068 848852 retry.go:31] will retry after 518.63445ms: missing components: kube-dns, kube-proxy
I1005 20:41:00.778717 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:00.778748 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:00.778756 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:00.778761 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:00.778767 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:00.778773 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:00.778778 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:00.778784 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:00.778800 848852 retry.go:31] will retry after 562.821701ms: missing components: kube-dns, kube-proxy
I1005 20:41:01.346346 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:01.346375 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:01.346382 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:01.346387 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:01.346393 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:01.346398 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:01.346405 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:01.346411 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:01.346428 848852 retry.go:31] will retry after 650.216203ms: missing components: kube-dns, kube-proxy
I1005 20:41:02.002459 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:02.002570 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:02.002590 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:02.002608 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:02.002624 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:02.002648 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:02.002664 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:02.002683 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:02.002711 848852 retry.go:31] will retry after 760.00556ms: missing components: kube-dns, kube-proxy
I1005 20:41:02.766915 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:02.766945 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:02.766953 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:02.766958 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:02.766963 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:02.766969 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:02.766974 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:02.766981 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:02.766998 848852 retry.go:31] will retry after 1.096256845s: missing components: kube-dns, kube-proxy
I1005 20:41:03.868393 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:03.868432 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:03.868441 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:03.868448 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:03.868456 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:03.868463 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:03.868472 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:03.868483 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:03.868508 848852 retry.go:31] will retry after 1.275861458s: missing components: kube-dns, kube-proxy
I1005 20:41:05.148320 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:05.148350 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:05.148357 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:05.148362 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:05.148367 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:05.148373 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:05.148379 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:05.148385 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:05.148402 848852 retry.go:31] will retry after 1.401487372s: missing components: kube-dns, kube-proxy
I1005 20:41:06.554819 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:06.554857 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:06.554867 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:06.554878 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:06.554886 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:06.554894 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:06.554903 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:06.554909 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:06.554929 848852 retry.go:31] will retry after 1.850633234s: missing components: kube-dns, kube-proxy
I1005 20:41:08.410662 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:08.410692 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:08.410699 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:08.410704 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:08.410709 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:08.410715 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:08.410720 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:08.410726 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:08.410742 848852 retry.go:31] will retry after 3.472865824s: missing components: kube-dns, kube-proxy
I1005 20:41:11.889408 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:11.889447 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:11.889455 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:11.889460 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:11.889465 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:11.889471 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:11.889476 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:11.889483 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:11.889500 848852 retry.go:31] will retry after 3.085936718s: missing components: kube-dns, kube-proxy
I1005 20:41:14.981245 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:14.981284 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:14.981295 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:14.981304 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:14.981313 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:14.981325 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:14.981336 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:14.981347 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:14.981366 848852 retry.go:31] will retry after 4.272914778s: missing components: kube-dns, kube-proxy
I1005 20:41:19.259463 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:19.259496 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:19.259503 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:19.259509 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:19.259513 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:19.259519 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:19.259524 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:19.259530 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:19.259549 848852 retry.go:31] will retry after 5.262882276s: missing components: kube-dns, kube-proxy
I1005 20:41:24.526746 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:24.526779 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:24.526786 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:24.526792 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:24.526796 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:24.526801 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:24.526806 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:24.526812 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:24.526831 848852 retry.go:31] will retry after 6.668638073s: missing components: kube-dns, kube-proxy
I1005 20:41:31.201287 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:31.201327 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:31.201337 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:31.201346 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:31.201353 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:31.201360 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:31.201369 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:31.201383 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:31.201407 848852 retry.go:31] will retry after 9.396673494s: missing components: kube-dns, kube-proxy
I1005 20:41:40.603038 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:40.603071 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:40.603078 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:40.603085 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:40.603090 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:40.603096 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:40.603101 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:40.603137 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:40.603156 848852 retry.go:31] will retry after 13.83982148s: missing components: kube-dns, kube-proxy
I1005 20:41:54.447269 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:41:54.447300 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:41:54.447307 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:41:54.447315 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:41:54.447320 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:41:54.447325 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:41:54.447330 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:41:54.447336 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:41:54.447351 848852 retry.go:31] will retry after 16.909017562s: missing components: kube-dns, kube-proxy
I1005 20:42:11.362760 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:42:11.362798 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:42:11.362808 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:42:11.362816 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:42:11.362824 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:42:11.362833 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:42:11.362844 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:42:11.362857 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:42:11.362886 848852 retry.go:31] will retry after 13.151324006s: missing components: kube-dns, kube-proxy
I1005 20:42:24.519701 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:42:24.519745 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:42:24.519756 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:42:24.519764 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:42:24.519771 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:42:24.519777 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:42:24.519784 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:42:24.519800 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:42:24.519823 848852 retry.go:31] will retry after 19.438415105s: missing components: kube-dns, kube-proxy
I1005 20:42:43.963102 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:42:43.963137 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:42:43.963145 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:42:43.963150 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:42:43.963154 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:42:43.963160 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:42:43.963165 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:42:43.963171 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:42:43.963192 848852 retry.go:31] will retry after 27.185744025s: missing components: kube-dns, kube-proxy
I1005 20:43:11.154216 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:43:11.154250 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:43:11.154258 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:43:11.154263 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:43:11.154269 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:43:11.154274 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:43:11.154281 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:43:11.154287 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:43:11.154303 848852 retry.go:31] will retry after 30.621447152s: missing components: kube-dns, kube-proxy
I1005 20:43:41.781018 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:43:41.781059 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:43:41.781067 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:43:41.781072 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:43:41.781078 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:43:41.781085 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:43:41.781091 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:43:41.781101 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:43:41.781125 848852 retry.go:31] will retry after 48.291810362s: missing components: kube-dns, kube-proxy
I1005 20:44:30.078532 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:44:30.078565 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:44:30.078577 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:44:30.078585 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:44:30.078593 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:44:30.078602 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:44:30.078609 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:44:30.078619 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:44:30.078642 848852 retry.go:31] will retry after 45.333697219s: missing components: kube-dns, kube-proxy
I1005 20:45:15.417486 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:45:15.417531 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:45:15.417542 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:45:15.417550 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:45:15.417558 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:45:15.417565 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:45:15.417579 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:45:15.417589 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:45:15.417621 848852 retry.go:31] will retry after 1m12.232820849s: missing components: kube-dns, kube-proxy
I1005 20:46:27.657681 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:46:27.657726 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:46:27.657736 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:46:27.657741 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:46:27.657747 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:46:27.657753 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:46:27.657758 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:46:27.657766 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:46:27.660128 848852 out.go:177]
W1005 20:46:27.662010 848852 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
W1005 20:46:27.662024 848852 out.go:239] *
W1005 20:46:27.662801 848852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1005 20:46:27.665143 848852 out.go:177]
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.16.0": exit status 80
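The retry.go entries in the stderr above show the polling loop behind this failure: each pass lists the kube-system pods, and the wait between passes grows roughly 1.5x with jitter (13s, 19s, 27s, ... 1m12s) until the 6m0s budget is spent. Below is a minimal Go sketch of that capped-backoff-with-jitter pattern; it is illustrative only, not minikube's actual retry.go, and the growth factor, cap, and jitter width are assumptions.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff polls check until it succeeds or the deadline passes,
    // sleeping a growing, jittered interval between attempts, like the
    // "will retry after ..." lines above.
    func retryWithBackoff(deadline time.Duration, check func() error) error {
    	start := time.Now()
    	wait := 10 * time.Second
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("timed out after %s: %w", deadline, err)
    		}
    		// Jitter by up to +/-25% so concurrent pollers do not synchronize.
    		sleep := wait - wait/4 + time.Duration(rand.Int63n(int64(wait/2)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		if wait < time.Minute {
    			wait = wait * 3 / 2 // assumed ~1.5x growth, capped near one minute
    		}
    	}
    }

    func main() {
    	// Short deadline for demonstration; the failed start above waited 6m0s.
    	err := retryWithBackoff(30*time.Second, func() error {
    		return errors.New("missing components: kube-dns, kube-proxy")
    	})
    	fmt.Println(err)
    }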
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-330869
helpers_test.go:235: (dbg) docker inspect old-k8s-version-330869:
-- stdout --
[
{
"Id": "0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9",
"Created": "2023-10-05T20:36:53.706444621Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 850463,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-10-05T20:36:54.08354438Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
"ResolvConfPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hostname",
"HostsPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hosts",
"LogPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9-json.log",
"Name": "/old-k8s-version-330869",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-330869:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "old-k8s-version-330869",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca-init/diff:/var/lib/docker/overlay2/e65b3f74dc6bfb6767eea300df98bf2be99245c1b234ea43800cf021cd81177d/diff",
"MergedDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/merged",
"UpperDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/diff",
"WorkDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-330869",
"Source": "/var/lib/docker/volumes/old-k8s-version-330869/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-330869",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-330869",
"name.minikube.sigs.k8s.io": "old-k8s-version-330869",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e9d5900763ffac860582f91e1cc24789bad5009ed40771fbeb5d999159eee780",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33383"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33382"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33379"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33381"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33380"
}
]
},
"SandboxKey": "/var/run/docker/netns/e9d5900763ff",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-330869": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": [
"0ffb18ccd18d",
"old-k8s-version-330869"
],
"NetworkID": "b2ec8c9cc8a493d14667efb735586eda5a96dcf492505b426d598dbb05a7c972",
"EndpointID": "75ea406fffba6499772cde5d775de2d6bc83b43060d83c230ed678cdae12bc5e",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
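Most of this inspect dump is noise for the post-mortem; the checks that follow pull single fields with a Go template instead (the log later runs docker container inspect with --format={{.State.Status}}). A minimal sketch of that pattern, assuming only that the docker CLI is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState shells out to `docker container inspect` with a Go
    // template, returning just the .State.Status field from the JSON above.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("old-k8s-version-330869")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("container state:", state) // e.g. "running"
    }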
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p old-k8s-version-330869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | -p embed-certs-411409 sudo | embed-certs-411409 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | crictl images -o json | | | | | |
| pause | -p embed-certs-411409 | embed-certs-411409 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p embed-certs-411409 | embed-certs-411409 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p embed-certs-411409 | embed-certs-411409 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| delete | -p embed-certs-411409 | embed-certs-411409 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| start | -p newest-cni-251602 --memory=2200 --alsologtostderr | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:45 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=docker --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.28.2 | | | | | |
| ssh | -p | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | default-k8s-diff-port-973002 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | default-k8s-diff-port-973002 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | default-k8s-diff-port-973002 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | default-k8s-diff-port-973002 | | | | | |
| delete | -p | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | default-k8s-diff-port-973002 | | | | | |
| ssh | -p no-preload-477708 sudo | no-preload-477708 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | crictl images -o json | | | | | |
| pause | -p no-preload-477708 | no-preload-477708 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-477708 | no-preload-477708 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-477708 | no-preload-477708 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| delete | -p no-preload-477708 | no-preload-477708 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
| addons | enable metrics-server -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-251602 --memory=2200 --alsologtostderr | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=docker --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.28.2 | | | | | |
| ssh | -p newest-cni-251602 sudo | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | crictl images -o json | | | | | |
| pause | -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
| delete | -p newest-cni-251602 | newest-cni-251602 | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/10/05 20:45:14
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.21.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
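The header above documents the klog-style line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used by every entry that follows. A small illustrative parser for triaging such logs; the regex is an assumption for this sketch, not the format's official grammar:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine splits a line into severity, date, time, pid, source, and message.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
    	line := "I1005 20:45:14.405012  941739 out.go:296] Setting OutFile to fd 1 ..."
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }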
I1005 20:45:14.405012 941739 out.go:296] Setting OutFile to fd 1 ...
I1005 20:45:14.405318 941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:45:14.405332 941739 out.go:309] Setting ErrFile to fd 2...
I1005 20:45:14.405338 941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:45:14.405563 941739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:45:14.406125 941739 out.go:303] Setting JSON to false
I1005 20:45:14.408036 941739 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8863,"bootTime":1696529852,"procs":691,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1005 20:45:14.408112 941739 start.go:138] virtualization: kvm guest
I1005 20:45:14.411041 941739 out.go:177] * [newest-cni-251602] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I1005 20:45:14.412825 941739 out.go:177] - MINIKUBE_LOCATION=17363
I1005 20:45:14.414496 941739 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1005 20:45:14.412885 941739 notify.go:220] Checking for updates...
I1005 20:45:14.417488 941739 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
I1005 20:45:14.419444 941739 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
I1005 20:45:14.420812 941739 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1005 20:45:14.422387 941739 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1005 20:45:14.424417 941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:45:14.424920 941739 driver.go:378] Setting default libvirt URI to qemu:///system
I1005 20:45:14.447137 941739 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
I1005 20:45:14.447233 941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1005 20:45:14.502313 941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.492746667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1005 20:45:14.502465 941739 docker.go:294] overlay module found
I1005 20:45:14.504743 941739 out.go:177] * Using the docker driver based on existing profile
I1005 20:45:14.506376 941739 start.go:298] selected driver: docker
I1005 20:45:14.506399 941739 start.go:902] validating driver "docker" against &{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1005 20:45:14.506507 941739 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1005 20:45:14.507273 941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1005 20:45:14.559655 941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.550952004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1005 20:45:14.560012 941739 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I1005 20:45:14.560046 941739 cni.go:84] Creating CNI manager for ""
I1005 20:45:14.560066 941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1005 20:45:14.560079 941739 start_flags.go:321] config:
{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1005 20:45:14.562326 941739 out.go:177] * Starting control plane node newest-cni-251602 in cluster newest-cni-251602
I1005 20:45:14.565495 941739 cache.go:122] Beginning downloading kic base image for docker with docker
I1005 20:45:14.567000 941739 out.go:177] * Pulling base image ...
I1005 20:45:14.568566 941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I1005 20:45:14.568620 941739 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
I1005 20:45:14.568631 941739 cache.go:57] Caching tarball of preloaded images
I1005 20:45:14.568707 941739 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
I1005 20:45:14.568717 941739 preload.go:174] Found /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1005 20:45:14.568791 941739 cache.go:60] Finished verifying existence of preloaded tar for v1.28.2 on docker
I1005 20:45:14.568916 941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
I1005 20:45:14.586420 941739 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
I1005 20:45:14.586452 941739 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
I1005 20:45:14.586477 941739 cache.go:195] Successfully downloaded all kic artifacts
I1005 20:45:14.586522 941739 start.go:365] acquiring machines lock for newest-cni-251602: {Name:mkefe4baf7b8136c10dd9c20a98860ec3c495766 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1005 20:45:14.586596 941739 start.go:369] acquired machines lock for "newest-cni-251602" in 47.72µs
I1005 20:45:14.586622 941739 start.go:96] Skipping create...Using existing machine configuration
I1005 20:45:14.586642 941739 fix.go:54] fixHost starting:
I1005 20:45:14.587273 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:14.605317 941739 fix.go:102] recreateIfNeeded on newest-cni-251602: state=Stopped err=<nil>
W1005 20:45:14.605354 941739 fix.go:128] unexpected machine state, will restart: <nil>
I1005 20:45:14.607609 941739 out.go:177] * Restarting existing docker container for "newest-cni-251602" ...
I1005 20:45:14.609066 941739 cli_runner.go:164] Run: docker start newest-cni-251602
I1005 20:45:14.897686 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:14.916217 941739 kic.go:426] container "newest-cni-251602" state is running.
I1005 20:45:14.916594 941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
I1005 20:45:14.935722 941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
I1005 20:45:14.935987 941739 machine.go:88] provisioning docker machine ...
I1005 20:45:14.936015 941739 ubuntu.go:169] provisioning hostname "newest-cni-251602"
I1005 20:45:14.936080 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:14.954269 941739 main.go:141] libmachine: Using SSH client type: native
I1005 20:45:14.954655 941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33423 <nil> <nil>}
I1005 20:45:14.954675 941739 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-251602 && echo "newest-cni-251602" | sudo tee /etc/hostname
I1005 20:45:14.955367 941739 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60694->127.0.0.1:33423: read: connection reset by peer
I1005 20:45:18.101383 941739 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-251602
I1005 20:45:18.101493 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:18.118632 941739 main.go:141] libmachine: Using SSH client type: native
I1005 20:45:18.118970 941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33423 <nil> <nil>}
I1005 20:45:18.118988 941739 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-251602' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-251602/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-251602' | sudo tee -a /etc/hosts;
fi
fi
I1005 20:45:18.254181 941739 main.go:141] libmachine: SSH cmd err, output: <nil>:
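The provisioner drives all of this over the container's published SSH port (127.0.0.1:33423, user docker, key under .minikube/machines/, per the sshutil lines below). A minimal sketch of running one of these commands with golang.org/x/crypto/ssh; libmachine's native client differs in detail, so treat this as illustrative:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials addr with the given private key and runs cmd,
    // returning combined stdout/stderr, like the hostname step above.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("127.0.0.1:33423",
    		"/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa",
    		`sudo hostname newest-cni-251602 && echo "newest-cni-251602" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }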
I1005 20:45:18.254212 941739 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-491115/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-491115/.minikube}
I1005 20:45:18.254247 941739 ubuntu.go:177] setting up certificates
I1005 20:45:18.254259 941739 provision.go:83] configureAuth start
I1005 20:45:18.254314 941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
I1005 20:45:18.271133 941739 provision.go:138] copyHostCerts
I1005 20:45:18.271209 941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem, removing ...
I1005 20:45:18.271225 941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem
I1005 20:45:18.271301 941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem (1082 bytes)
I1005 20:45:18.271415 941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem, removing ...
I1005 20:45:18.271430 941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem
I1005 20:45:18.271455 941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem (1123 bytes)
I1005 20:45:18.271518 941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem, removing ...
I1005 20:45:18.271526 941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem
I1005 20:45:18.271548 941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem (1679 bytes)
I1005 20:45:18.271607 941739 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem org=jenkins.newest-cni-251602 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-251602]
I1005 20:45:18.410529 941739 provision.go:172] copyRemoteCerts
I1005 20:45:18.410591 941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1005 20:45:18.410642 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:18.427655 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:18.525913 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1005 20:45:18.548522 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I1005 20:45:18.571080 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1005 20:45:18.594270 941739 provision.go:86] duration metric: configureAuth took 339.997588ms
I1005 20:45:18.594302 941739 ubuntu.go:193] setting minikube options for container-runtime
I1005 20:45:18.594515 941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:45:18.594580 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:18.611692 941739 main.go:141] libmachine: Using SSH client type: native
I1005 20:45:18.612072 941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33423 <nil> <nil>}
I1005 20:45:18.612089 941739 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1005 20:45:18.745964 941739 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1005 20:45:18.745987 941739 ubuntu.go:71] root file system type: overlay
I1005 20:45:18.746127 941739 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1005 20:45:18.746195 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:18.763221 941739 main.go:141] libmachine: Using SSH client type: native
I1005 20:45:18.763676 941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33423 <nil> <nil>}
I1005 20:45:18.763773 941739 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1005 20:45:18.908747 941739 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1005 20:45:18.908833 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:18.927242 941739 main.go:141] libmachine: Using SSH client type: native
I1005 20:45:18.927586 941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil> [] 0s} 127.0.0.1 33423 <nil> <nil>}
I1005 20:45:18.927612 941739 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1005 20:45:19.070807 941739 main.go:141] libmachine: SSH cmd err, output: <nil>:
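The diff || { mv; daemon-reload; restart; } command just above is the idempotent half of the unit update: docker is only restarted when the freshly rendered docker.service actually differs from the installed one. A rough Go equivalent of that compare-then-swap, a sketch assuming local root access rather than the remote SSH path minikube actually uses:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateUnit swaps in the rendered unit and cycles docker only on change.
    func updateUnit(installed, rendered string) error {
    	old, _ := os.ReadFile(installed) // may not exist yet; treat as empty
    	next, err := os.ReadFile(rendered)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(old, next) {
    		return nil // unit unchanged: skip daemon-reload and restart
    	}
    	if err := os.Rename(rendered, installed); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new")
    	fmt.Println(err)
    }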
I1005 20:45:19.070845 941739 machine.go:91] provisioned docker machine in 4.134838843s
I1005 20:45:19.070863 941739 start.go:300] post-start starting for "newest-cni-251602" (driver="docker")
I1005 20:45:19.070880 941739 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1005 20:45:19.070965 941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1005 20:45:19.071034 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:19.088361 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:19.186060 941739 ssh_runner.go:195] Run: cat /etc/os-release
I1005 20:45:19.189266 941739 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1005 20:45:19.189348 941739 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1005 20:45:19.189371 941739 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1005 20:45:19.189382 941739 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I1005 20:45:19.189396 941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/addons for local assets ...
I1005 20:45:19.189452 941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/files for local assets ...
I1005 20:45:19.189539 941739 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem -> 4979262.pem in /etc/ssl/certs
I1005 20:45:19.189654 941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1005 20:45:19.198001 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /etc/ssl/certs/4979262.pem (1708 bytes)
I1005 20:45:19.219671 941739 start.go:303] post-start completed in 148.789062ms
I1005 20:45:19.219760 941739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1005 20:45:19.219819 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:19.237287 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:19.330407 941739 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1005 20:45:19.334776 941739 fix.go:56] fixHost completed within 4.748135457s
I1005 20:45:19.334813 941739 start.go:83] releasing machines lock for "newest-cni-251602", held for 4.7482043s
I1005 20:45:19.334891 941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
I1005 20:45:19.351556 941739 ssh_runner.go:195] Run: cat /version.json
I1005 20:45:19.351608 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:19.351662 941739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1005 20:45:19.351741 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:19.368619 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:19.369076 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:19.550177 941739 ssh_runner.go:195] Run: systemctl --version
I1005 20:45:19.554696 941739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1005 20:45:19.559119 941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1005 20:45:19.576904 941739 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1005 20:45:19.576985 941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1005 20:45:19.585375 941739 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1005 20:45:19.585410 941739 start.go:469] detecting cgroup driver to use...
I1005 20:45:19.585444 941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
I1005 20:45:19.585560 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1005 20:45:19.600124 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1005 20:45:19.609154 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1005 20:45:19.618149 941739 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1005 20:45:19.618216 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1005 20:45:19.627522 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1005 20:45:19.636836 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1005 20:45:19.646086 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1005 20:45:19.655673 941739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1005 20:45:19.664512 941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1005 20:45:19.674505 941739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1005 20:45:19.682683 941739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1005 20:45:19.691073 941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:45:19.769287 941739 ssh_runner.go:195] Run: sudo systemctl restart containerd
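
The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), pins the sandbox image, and points conf_dir at /etc/cni/net.d, after which systemd is reloaded and containerd restarted. A minimal Go sketch of just the SystemdCgroup rewrite, assuming the standard config path and sufficient privileges:

package main

import (
	"log"
	"os"
	"regexp"
)

// Rewrite "SystemdCgroup = <anything>" to "SystemdCgroup = false" in
// containerd's config, mirroring the sed invocation in the log above.
func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
}
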
I1005 20:45:19.852660 941739 start.go:469] detecting cgroup driver to use...
I1005 20:45:19.852792 941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
I1005 20:45:19.852882 941739 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1005 20:45:19.864848 941739 cruntime.go:277] skipping containerd shutdown because we are bound to it
I1005 20:45:19.864918 941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1005 20:45:19.877630 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1005 20:45:19.895392 941739 ssh_runner.go:195] Run: which cri-dockerd
I1005 20:45:19.899661 941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1005 20:45:19.918552 941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1005 20:45:19.936911 941739 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1005 20:45:20.046865 941739 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1005 20:45:20.144163 941739 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
I1005 20:45:20.144299 941739 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1005 20:45:20.161707 941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:45:20.251848 941739 ssh_runner.go:195] Run: sudo systemctl restart docker
I1005 20:45:20.520825 941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1005 20:45:20.605718 941739 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1005 20:45:20.688963 941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1005 20:45:20.773512 941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:45:20.854013 941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1005 20:45:20.867324 941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1005 20:45:20.946882 941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1005 20:45:21.017496 941739 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1005 20:45:21.017569 941739 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1005 20:45:21.021797 941739 start.go:537] Will wait 60s for crictl version
I1005 20:45:21.021856 941739 ssh_runner.go:195] Run: which crictl
I1005 20:45:21.025426 941739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1005 20:45:21.070905 941739 start.go:553] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1
I1005 20:45:21.070975 941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1005 20:45:21.094936 941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1005 20:45:21.121912 941739 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
I1005 20:45:21.121999 941739 cli_runner.go:164] Run: docker network inspect newest-cni-251602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1005 20:45:21.138556 941739 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1005 20:45:21.142440 941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
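
The one-liner above keeps /etc/hosts idempotent: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back with sudo (a plain '>' redirect would not cross the privilege boundary). The same update as a Go sketch, using the IP and name from the log:

package main

import (
	"log"
	"os"
	"strings"
)

// Idempotently pin host.minikube.internal in /etc/hosts: drop any
// existing line for the name, then append the fresh mapping.
func main() {
	const path = "/etc/hosts"
	const name = "host.minikube.internal"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.67.1\t"+name)
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
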
I1005 20:45:21.154570 941739 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I1005 20:45:21.157976 941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I1005 20:45:21.158071 941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1005 20:45:21.178251 941739 docker.go:664] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1005 20:45:21.178278 941739 docker.go:594] Images already preloaded, skipping extraction
I1005 20:45:21.178347 941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1005 20:45:21.197723 941739 docker.go:664] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1005 20:45:21.197759 941739 cache_images.go:84] Images are preloaded, skipping loading
I1005 20:45:21.197823 941739 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1005 20:45:21.251580 941739 cni.go:84] Creating CNI manager for ""
I1005 20:45:21.251616 941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1005 20:45:21.251639 941739 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
I1005 20:45:21.251658 941739 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-251602 NodeName:newest-cni-251602 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1005 20:45:21.251840 941739 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "newest-cni-251602"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
feature-gates: "ServerSideApply=true"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
feature-gates: "ServerSideApply=true"
leader-elect: "false"
scheduler:
extraArgs:
feature-gates: "ServerSideApply=true"
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.2
networking:
dnsDomain: cluster.local
podSubnet: "10.42.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
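
minikube renders the kubeadm config above from a Go template populated by the options struct logged at kubeadm.go:176. A trimmed sketch of that render step; the struct and template text here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in for minikube's kubeadm config templating:
// fill a ClusterConfiguration fragment from a small options struct.
type opts struct {
	K8sVersion  string
	PodSubnet   string
	ServiceCIDR string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts{
		K8sVersion:  "v1.28.2",
		PodSubnet:   "10.42.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
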
I1005 20:45:21.251930 941739 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-251602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
[Install]
config:
{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1005 20:45:21.251984 941739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
I1005 20:45:21.260656 941739 binaries.go:44] Found k8s binaries, skipping transfer
I1005 20:45:21.260726 941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1005 20:45:21.269056 941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
I1005 20:45:21.286089 941739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1005 20:45:21.302730 941739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
I1005 20:45:21.319579 941739 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1005 20:45:21.322925 941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1005 20:45:21.333438 941739 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602 for IP: 192.168.67.2
I1005 20:45:21.333472 941739 certs.go:190] acquiring lock for shared ca certs: {Name:mka6627fa5c31076c5fa233a6bbda946476bff5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:45:21.333619 941739 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key
I1005 20:45:21.333654 941739 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key
I1005 20:45:21.333737 941739 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/client.key
I1005 20:45:21.333791 941739 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key.c7fa3a9e
I1005 20:45:21.333823 941739 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key
I1005 20:45:21.333912 941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem (1338 bytes)
W1005 20:45:21.333938 941739 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926_empty.pem, impossibly tiny 0 bytes
I1005 20:45:21.333949 941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem (1671 bytes)
I1005 20:45:21.333973 941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem (1082 bytes)
I1005 20:45:21.334008 941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem (1123 bytes)
I1005 20:45:21.334047 941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem (1679 bytes)
I1005 20:45:21.334102 941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem (1708 bytes)
I1005 20:45:21.334741 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1005 20:45:21.357132 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1005 20:45:21.379412 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1005 20:45:21.402402 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1005 20:45:21.425553 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1005 20:45:21.448572 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1005 20:45:21.470803 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1005 20:45:21.492671 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1005 20:45:21.514617 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem --> /usr/share/ca-certificates/497926.pem (1338 bytes)
I1005 20:45:21.537065 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /usr/share/ca-certificates/4979262.pem (1708 bytes)
I1005 20:45:21.559657 941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1005 20:45:21.582144 941739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1005 20:45:21.598672 941739 ssh_runner.go:195] Run: openssl version
I1005 20:45:21.604061 941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497926.pem && ln -fs /usr/share/ca-certificates/497926.pem /etc/ssl/certs/497926.pem"
I1005 20:45:21.613694 941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497926.pem
I1005 20:45:21.617122 941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 5 20:07 /usr/share/ca-certificates/497926.pem
I1005 20:45:21.617186 941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497926.pem
I1005 20:45:21.623795 941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497926.pem /etc/ssl/certs/51391683.0"
I1005 20:45:21.632192 941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4979262.pem && ln -fs /usr/share/ca-certificates/4979262.pem /etc/ssl/certs/4979262.pem"
I1005 20:45:21.641540 941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4979262.pem
I1005 20:45:21.644804 941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 5 20:07 /usr/share/ca-certificates/4979262.pem
I1005 20:45:21.644853 941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4979262.pem
I1005 20:45:21.651399 941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4979262.pem /etc/ssl/certs/3ec20f2e.0"
I1005 20:45:21.659734 941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1005 20:45:21.668779 941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1005 20:45:21.672400 941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 5 20:03 /usr/share/ca-certificates/minikubeCA.pem
I1005 20:45:21.672473 941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1005 20:45:21.678971 941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1005 20:45:21.688374 941739 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1005 20:45:21.691701 941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1005 20:45:21.698446 941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1005 20:45:21.704585 941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1005 20:45:21.710930 941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1005 20:45:21.717269 941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1005 20:45:21.723706 941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
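
Each openssl call above uses -checkend 86400 to ask whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check as a Go sketch against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Report whether a PEM certificate expires within the next 24 hours,
// the question `openssl x509 -checkend 86400` answers.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate it")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
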
I1005 20:45:21.730244 941739 kubeadm.go:404] StartCluster: {Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1005 20:45:21.730390 941739 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1005 20:45:21.749238 941739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1005 20:45:21.757704 941739 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I1005 20:45:21.757777 941739 kubeadm.go:636] restartCluster start
I1005 20:45:21.757833 941739 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1005 20:45:21.766002 941739 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1005 20:45:21.766568 941739 kubeconfig.go:135] verify returned: extract IP: "newest-cni-251602" does not appear in /home/jenkins/minikube-integration/17363-491115/kubeconfig
I1005 20:45:21.766798 941739 kubeconfig.go:146] "newest-cni-251602" context is missing from /home/jenkins/minikube-integration/17363-491115/kubeconfig - will repair!
I1005 20:45:21.767178 941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:45:21.768584 941739 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1005 20:45:21.777081 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:21.777142 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:21.786498 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:21.786517 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:21.786555 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:21.795849 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:22.296543 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:22.296643 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:22.307113 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:22.796806 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:22.796920 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:22.807658 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:23.296196 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:23.296307 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:23.307063 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:23.796660 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:23.796750 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:23.807326 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:24.296919 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:24.297003 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:24.307595 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:24.796497 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:24.796585 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:24.807169 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:25.296770 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:25.296888 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:25.307546 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:25.796061 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:25.796166 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:25.806783 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:26.296330 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:26.296433 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:26.307074 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:26.796470 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:26.796577 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:26.806786 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:27.296331 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:27.296415 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:27.306522 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:27.796815 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:27.796927 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:27.807056 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:28.296676 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:28.296772 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:28.307093 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:28.796685 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:28.796792 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:28.807035 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:29.296656 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:29.296766 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:29.306878 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:29.796676 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:29.796758 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:29.807141 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:30.296755 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:30.296850 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:30.306907 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:30.796266 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:30.796377 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:30.806636 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:31.296136 941739 api_server.go:166] Checking apiserver status ...
I1005 20:45:31.296248 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1005 20:45:31.306343 941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1005 20:45:31.778147 941739 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
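
The ten seconds of pgrep retries above are a fixed-interval poll: minikube re-runs the process check roughly every 500ms until it succeeds or its context deadline expires, then falls back to reconfiguring the cluster. A minimal sketch of that poll-with-deadline pattern; the interval and timeout here are illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// Retry a process check every 500ms until it succeeds or a deadline
// passes, mirroring the "Checking apiserver status ..." loop above.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process found")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}
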
I1005 20:45:31.778197 941739 kubeadm.go:1128] stopping kube-system containers ...
I1005 20:45:31.778276 941739 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1005 20:45:31.799139 941739 docker.go:463] Stopping containers: [edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7]
I1005 20:45:31.799221 941739 ssh_runner.go:195] Run: docker stop edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7
I1005 20:45:31.819269 941739 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1005 20:45:31.831589 941739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1005 20:45:31.840562 941739 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Oct 5 20:44 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Oct 5 20:44 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2007 Oct 5 20:44 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Oct 5 20:44 /etc/kubernetes/scheduler.conf
I1005 20:45:31.840635 941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1005 20:45:31.848959 941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1005 20:45:31.857521 941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1005 20:45:31.865912 941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1005 20:45:31.865992 941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1005 20:45:31.874539 941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1005 20:45:31.882971 941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1005 20:45:31.883036 941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1005 20:45:31.891165 941739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1005 20:45:31.899809 941739 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1005 20:45:31.899844 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1005 20:45:31.950458 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1005 20:45:32.439655 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1005 20:45:32.588235 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1005 20:45:32.644120 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1005 20:45:32.740951 941739 api_server.go:52] waiting for apiserver process to appear ...
I1005 20:45:32.741029 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1005 20:45:32.753615 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1005 20:45:33.330126 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1005 20:45:33.829788 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1005 20:45:33.846461 941739 api_server.go:72] duration metric: took 1.105507442s to wait for apiserver process to appear ...
I1005 20:45:33.846542 941739 api_server.go:88] waiting for apiserver healthz status ...
I1005 20:45:33.846578 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:33.846977 941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I1005 20:45:33.847055 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:33.847357 941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I1005 20:45:34.348075 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:36.627973 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1005 20:45:36.628063 941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1005 20:45:36.628087 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:36.740856 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1005 20:45:36.740956 941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1005 20:45:36.848296 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:36.853601 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1005 20:45:36.853628 941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1005 20:45:37.348237 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:37.352593 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1005 20:45:37.352618 941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1005 20:45:37.847923 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:37.852873 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1005 20:45:37.852902 941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1005 20:45:38.348152 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:38.354442 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
ok
I1005 20:45:38.363755 941739 api_server.go:141] control plane version: v1.28.2
I1005 20:45:38.363785 941739 api_server.go:131] duration metric: took 4.517223524s to wait for apiserver health ...
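
The healthz loop above tolerates two failure shapes while the control plane settles: a 403 while the anonymous user is still blocked ahead of the RBAC bootstrap, and 500s while individual post-start hooks report failed, until a plain 200 "ok" arrives. One such probe as a Go sketch; it skips TLS verification where minikube's real client trusts the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Single apiserver healthz probe, as in the checks above. 403 and 500
// are expected transients; only a 200 with body "ok" counts as healthy.
func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification for the sketch; the real
			// client is configured with the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. connection refused during restart
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.67.2:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
}
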
I1005 20:45:38.363796 941739 cni.go:84] Creating CNI manager for ""
I1005 20:45:38.363807 941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1005 20:45:38.365566 941739 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1005 20:45:38.366945 941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1005 20:45:38.375605 941739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I1005 20:45:38.418968 941739 system_pods.go:43] waiting for kube-system pods to appear ...
I1005 20:45:38.430492 941739 system_pods.go:59] 8 kube-system pods found
I1005 20:45:38.430531 941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:45:38.430541 941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1005 20:45:38.430550 941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1005 20:45:38.430560 941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1005 20:45:38.430571 941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:45:38.430603 941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1005 20:45:38.430617 941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1005 20:45:38.430631 941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:45:38.430641 941739 system_pods.go:74] duration metric: took 11.652857ms to wait for pod list to return data ...
I1005 20:45:38.430649 941739 node_conditions.go:102] verifying NodePressure condition ...
I1005 20:45:38.435489 941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1005 20:45:38.435522 941739 node_conditions.go:123] node cpu capacity is 8
I1005 20:45:38.435538 941739 node_conditions.go:105] duration metric: took 4.879676ms to run NodePressure ...
I1005 20:45:38.435565 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1005 20:45:38.709413 941739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1005 20:45:38.718207 941739 ops.go:34] apiserver oom_adj: -16
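The -16 read back above is the OOM protection on the apiserver process: /proc/<pid>/oom_adj is the legacy interface (range -17..15, lower means less likely to be OOM-killed), and the strongly negative oom_score_adj the kubelet assigns to critical static pods surfaces there as -16. A hedged manual check, reusing this run's binary and profile name:

    out/minikube-linux-amd64 -p newest-cni-251602 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'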
I1005 20:45:38.718235 941739 kubeadm.go:640] restartCluster took 16.960444278s
I1005 20:45:38.718247 941739 kubeadm.go:406] StartCluster complete in 16.988017482s
I1005 20:45:38.718274 941739 settings.go:142] acquiring lock: {Name:mk74c5e95d8c9fcaf06097e6d304129504752ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:45:38.718351 941739 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17363-491115/kubeconfig
I1005 20:45:38.719220 941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1005 20:45:38.719473 941739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1005 20:45:38.719630 941739 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I1005 20:45:38.719714 941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:45:38.719720 941739 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-251602"
I1005 20:45:38.719738 941739 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-251602"
W1005 20:45:38.719746 941739 addons.go:240] addon storage-provisioner should already be in state true
I1005 20:45:38.719745 941739 addons.go:69] Setting metrics-server=true in profile "newest-cni-251602"
I1005 20:45:38.719747 941739 addons.go:69] Setting default-storageclass=true in profile "newest-cni-251602"
I1005 20:45:38.719763 941739 addons.go:231] Setting addon metrics-server=true in "newest-cni-251602"
W1005 20:45:38.719772 941739 addons.go:240] addon metrics-server should already be in state true
I1005 20:45:38.719786 941739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-251602"
I1005 20:45:38.719799 941739 host.go:66] Checking if "newest-cni-251602" exists ...
I1005 20:45:38.719813 941739 host.go:66] Checking if "newest-cni-251602" exists ...
I1005 20:45:38.719800 941739 addons.go:69] Setting dashboard=true in profile "newest-cni-251602"
I1005 20:45:38.719834 941739 addons.go:231] Setting addon dashboard=true in "newest-cni-251602"
W1005 20:45:38.719843 941739 addons.go:240] addon dashboard should already be in state true
I1005 20:45:38.719903 941739 host.go:66] Checking if "newest-cni-251602" exists ...
I1005 20:45:38.720124 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:38.720279 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:38.720282 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:38.720344 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:38.723756 941739 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-251602" context rescaled to 1 replicas
I1005 20:45:38.723804 941739 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1005 20:45:38.727049 941739 out.go:177] * Verifying Kubernetes components...
I1005 20:45:38.728767 941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1005 20:45:38.743950 941739 addons.go:231] Setting addon default-storageclass=true in "newest-cni-251602"
W1005 20:45:38.744161 941739 addons.go:240] addon default-storageclass should already be in state true
I1005 20:45:38.744212 941739 host.go:66] Checking if "newest-cni-251602" exists ...
I1005 20:45:38.744748 941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
I1005 20:45:38.761605 941739 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1005 20:45:38.762994 941739 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1005 20:45:38.764361 941739 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1005 20:45:38.762962 941739 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1005 20:45:38.766924 941739 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1005 20:45:38.765746 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1005 20:45:38.765763 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1005 20:45:38.768302 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:38.768331 941739 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1005 20:45:38.768349 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1005 20:45:38.768396 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:38.768481 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1005 20:45:38.768528 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:38.771651 941739 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I1005 20:45:38.771678 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1005 20:45:38.771838 941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
I1005 20:45:38.792082 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:38.798427 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:38.803117 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:38.806797 941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
I1005 20:45:38.847194 941739 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1005 20:45:38.847278 941739 api_server.go:52] waiting for apiserver process to appear ...
I1005 20:45:38.847344 941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1005 20:45:38.926982 941739 api_server.go:72] duration metric: took 203.134329ms to wait for apiserver process to appear ...
I1005 20:45:38.927013 941739 api_server.go:88] waiting for apiserver healthz status ...
I1005 20:45:38.927033 941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1005 20:45:38.931963 941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
ok
I1005 20:45:38.933196 941739 api_server.go:141] control plane version: v1.28.2
I1005 20:45:38.933257 941739 api_server.go:131] duration metric: took 6.235518ms to wait for apiserver health ...
I1005 20:45:38.933268 941739 system_pods.go:43] waiting for kube-system pods to appear ...
I1005 20:45:38.938837 941739 system_pods.go:59] 8 kube-system pods found
I1005 20:45:38.938869 941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:45:38.938882 941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1005 20:45:38.938893 941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1005 20:45:38.938906 941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1005 20:45:38.938913 941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:45:38.938919 941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1005 20:45:38.938932 941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1005 20:45:38.938943 941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:45:38.938955 941739 system_pods.go:74] duration metric: took 5.679606ms to wait for pod list to return data ...
I1005 20:45:38.938967 941739 default_sa.go:34] waiting for default service account to be created ...
I1005 20:45:38.941596 941739 default_sa.go:45] found service account: "default"
I1005 20:45:38.941625 941739 default_sa.go:55] duration metric: took 2.647466ms for default service account to be created ...
I1005 20:45:38.941638 941739 kubeadm.go:581] duration metric: took 217.801105ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
I1005 20:45:38.941657 941739 node_conditions.go:102] verifying NodePressure condition ...
I1005 20:45:38.944359 941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1005 20:45:38.944385 941739 node_conditions.go:123] node cpu capacity is 8
I1005 20:45:38.944399 941739 node_conditions.go:105] duration metric: took 2.735534ms to run NodePressure ...
I1005 20:45:38.944414 941739 start.go:228] waiting for startup goroutines ...
I1005 20:45:39.031121 941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1005 20:45:39.031835 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1005 20:45:39.031864 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1005 20:45:39.037663 941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1005 20:45:39.037689 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1005 20:45:39.038028 941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1005 20:45:39.052055 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1005 20:45:39.052084 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1005 20:45:39.122929 941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1005 20:45:39.122960 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1005 20:45:39.135708 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1005 20:45:39.135738 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1005 20:45:39.148797 941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1005 20:45:39.148828 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1005 20:45:39.233123 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1005 20:45:39.233156 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1005 20:45:39.246996 941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1005 20:45:39.325634 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
I1005 20:45:39.325672 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1005 20:45:39.348115 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1005 20:45:39.348137 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1005 20:45:39.436685 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1005 20:45:39.436712 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1005 20:45:39.528259 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1005 20:45:39.528287 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1005 20:45:39.547672 941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1005 20:45:39.547706 941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1005 20:45:39.565975 941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1005 20:45:40.443947 941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.412776284s)
I1005 20:45:40.444070 941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.406004214s)
I1005 20:45:40.571364 941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324316067s)
I1005 20:45:40.571417 941739 addons.go:467] Verifying addon metrics-server=true in "newest-cni-251602"
I1005 20:45:40.917851 941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.351821533s)
I1005 20:45:40.919845 941739 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-251602 addons enable metrics-server
I1005 20:45:40.921418 941739 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I1005 20:45:40.922771 941739 addons.go:502] enable addons completed in 2.203154287s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I1005 20:45:40.922805 941739 start.go:233] waiting for cluster config update ...
I1005 20:45:40.922816 941739 start.go:242] writing updated cluster config ...
I1005 20:45:40.923059 941739 ssh_runner.go:195] Run: rm -f paused
I1005 20:45:40.970862 941739 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
I1005 20:45:40.972904 941739 out.go:177] * Done! kubectl is now configured to use "newest-cni-251602" cluster and "default" namespace by default
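The enabled-addons summary above can be double-checked against the live profile with minikube's standard addon query (same binary and profile name as this run):

    out/minikube-linux-amd64 -p newest-cni-251602 addons list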
I1005 20:46:27.657681 848852 system_pods.go:86] 7 kube-system pods found
I1005 20:46:27.657726 848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1005 20:46:27.657736 848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
I1005 20:46:27.657741 848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
I1005 20:46:27.657747 848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
I1005 20:46:27.657753 848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1005 20:46:27.657758 848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
I1005 20:46:27.657766 848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1005 20:46:27.660128 848852 out.go:177]
W1005 20:46:27.662010 848852 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
W1005 20:46:27.662024 848852 out.go:239] *
W1005 20:46:27.662801 848852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1005 20:46:27.665143 848852 out.go:177]
*
* ==> Docker <==
* Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.741750210Z" level=info msg="Loading containers: start."
Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.835828175Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.873800369Z" level=info msg="Loading containers: done."
Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883733188Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883797436Z" level=info msg="Daemon has completed initialization"
Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908207956Z" level=info msg="API listen on /var/run/docker.sock"
Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908224826Z" level=info msg="API listen on [::]:2376"
Oct 05 20:37:00 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopping Docker Application Container Engine...
Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.385384032Z" level=info msg="Processing signal 'terminated'"
Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.387134388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.388068483Z" level=info msg="Daemon shutdown complete"
Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: docker.service: Deactivated successfully.
Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopped Docker Application Container Engine.
Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Starting Docker Application Container Engine...
Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.451611505Z" level=info msg="Starting up"
Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.461647092Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.744132135Z" level=info msg="Loading containers: start."
Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.839612411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.001066181Z" level=info msg="Loading containers: done."
Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016241859Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016300931Z" level=info msg="Daemon has completed initialization"
Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039742052Z" level=info msg="API listen on /var/run/docker.sock"
Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039779377Z" level=info msg="API listen on [::]:2376"
Oct 05 20:37:07 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
56472dff5f81f 6e38f40d628db 8 minutes ago Running storage-provisioner 0 64075514dc163 storage-provisioner
9f0be33584868 bf261d1579144 8 minutes ago Running coredns 0 2e7135e437f0c coredns-5644d7b6d9-k2f47
cef84f5b51c49 c21b0c7400f98 8 minutes ago Running kube-proxy 0 a228f4c03cdba kube-proxy-n9cwb
530e42b9f6c77 b2756210eeabf 9 minutes ago Running etcd 0 7ab4e42c79a68 etcd-old-k8s-version-330869
6c66019a6e010 06a629a7e51cd 9 minutes ago Running kube-controller-manager 0 e87f561b15eaf kube-controller-manager-old-k8s-version-330869
a576da8318f84 301ddc62b80b1 9 minutes ago Running kube-scheduler 0 84631805dc0e9 kube-scheduler-old-k8s-version-330869
91420fd2d357f b305571ca60a5 9 minutes ago Running kube-apiserver 0 a2a65ce6717dd kube-apiserver-old-k8s-version-330869
*
* ==> coredns [9f0be3358486] <==
* .:53
2023-10-05T20:37:40.506Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2023-10-05T20:37:40.507Z [INFO] CoreDNS-1.6.2
2023-10-05T20:37:40.507Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
*
* ==> describe nodes <==
* Name: old-k8s-version-330869
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=old-k8s-version-330869
kubernetes.io/os=linux
minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
minikube.k8s.io/name=old-k8s-version-330869
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_10_05T20_37_23_0700
minikube.k8s.io/version=v1.31.2
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 05 Oct 2023 20:37:18 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 05 Oct 2023 20:46:20 +0000 Thu, 05 Oct 2023 20:37:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 05 Oct 2023 20:46:20 +0000 Thu, 05 Oct 2023 20:37:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 05 Oct 2023 20:46:20 +0000 Thu, 05 Oct 2023 20:37:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 05 Oct 2023 20:46:20 +0000 Thu, 05 Oct 2023 20:40:49 +0000 KubeletNotReady PLEG is not healthy: pleg was last seen active 8m40.67992945s ago; threshold is 3m0s
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-330869
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859420Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859420Ki
pods: 110
System Info:
Machine ID: da3d4e78336e4de3801cc5f1121e363a
System UUID: fb98631f-d977-49f6-8d13-47582452d2b5
Boot ID: 1c650140-d8f3-4a50-ac83-e0e6baf94598
Kernel Version: 5.15.0-1044-gcp
OS Image: Ubuntu 22.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-5644d7b6d9-k2f47                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
kube-system                 etcd-old-k8s-version-330869                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
kube-system                 kube-apiserver-old-k8s-version-330869              250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m5s
kube-system                 kube-controller-manager-old-k8s-version-330869     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m47s
kube-system                 kube-proxy-n9cwb                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
kube-system                 kube-scheduler-old-k8s-version-330869              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m47s
kube-system                 storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                650m (8%)   0 (0%)
memory             70Mi (0%)   170Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 9m16s (x8 over 9m16s) kubelet, old-k8s-version-330869 Node old-k8s-version-330869 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m16s (x8 over 9m16s) kubelet, old-k8s-version-330869 Node old-k8s-version-330869 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m16s (x7 over 9m16s) kubelet, old-k8s-version-330869 Node old-k8s-version-330869 status is now: NodeHasSufficientPID
Normal Starting 8m48s kube-proxy, old-k8s-version-330869 Starting kube-proxy.
*
* ==> dmesg <==
* [ +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e ee c2 a6 29 ac 08 06
[Oct 5 20:39] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev bridge
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 fb 5f c9 9e d7 08 06
[ +0.715332] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 50 38 95 e7 63 08 06
[ +8.065920] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 69 10 43 1f 0b 08 06
[ +16.180606] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 6a 13 59 d9 da 08 06
[Oct 5 20:43] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
[ +0.000013] ll header: 00000000: ff ff ff ff ff ff ae 52 50 a9 6f 53 08 06
[Oct 5 20:44] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 46 f6 d1 58 d2 08 06
[ +19.224580] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 9e 9c 80 d0 43 08 06
[ +8.732079] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 3b 8d 2f b2 6f 08 06
[ +1.563207] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 1a 9a 54 1a fc 08 06
[ +5.814222] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ea e4 45 a0 bd b2 08 06
[Oct 5 20:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff f2 32 f7 4c 9e 13 08 06
[ +35.890083] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 7a 81 76 84 0a 08 06
*
* ==> etcd [530e42b9f6c7] <==
* 2023-10-05 20:37:13.535147 I | raft: 9f0758e1c58a86ed became follower at term 0
2023-10-05 20:37:13.535157 I | raft: newRaft 9f0758e1c58a86ed [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2023-10-05 20:37:13.535162 I | raft: 9f0758e1c58a86ed became follower at term 1
2023-10-05 20:37:13.540464 W | auth: simple token is not cryptographically signed
2023-10-05 20:37:13.543597 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2023-10-05 20:37:13.544562 I | etcdserver: 9f0758e1c58a86ed as single-node; fast-forwarding 9 ticks (election ticks 10)
2023-10-05 20:37:13.544945 I | etcdserver/membership: added member 9f0758e1c58a86ed [https://192.168.85.2:2380] to cluster 68eaea490fab4e05
2023-10-05 20:37:13.546416 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2023-10-05 20:37:13.546557 I | embed: listening for metrics on http://192.168.85.2:2381
2023-10-05 20:37:13.546670 I | embed: listening for metrics on http://127.0.0.1:2381
2023-10-05 20:37:14.535543 I | raft: 9f0758e1c58a86ed is starting a new election at term 1
2023-10-05 20:37:14.535587 I | raft: 9f0758e1c58a86ed became candidate at term 2
2023-10-05 20:37:14.535619 I | raft: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
2023-10-05 20:37:14.535634 I | raft: 9f0758e1c58a86ed became leader at term 2
2023-10-05 20:37:14.535644 I | raft: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2023-10-05 20:37:14.535840 I | etcdserver: setting up the initial cluster version to 3.3
2023-10-05 20:37:14.536888 N | etcdserver/membership: set the initial cluster version to 3.3
2023-10-05 20:37:14.536932 I | etcdserver/api: enabled capabilities for version 3.3
2023-10-05 20:37:14.536948 I | embed: ready to serve client requests
2023-10-05 20:37:14.536984 I | etcdserver: published {Name:old-k8s-version-330869 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2023-10-05 20:37:14.537012 I | embed: ready to serve client requests
2023-10-05 20:37:14.538573 I | embed: serving client requests on 192.168.85.2:2379
2023-10-05 20:37:14.538614 I | embed: serving client requests on 127.0.0.1:2379
2023-10-05 20:37:52.212401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-k2f47\" " with result "range_response_count:1 size:1693" took too long (122.897349ms) to execute
2023-10-05 20:37:52.212479 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (125.865716ms) to execute
*
* ==> kernel <==
* 20:46:28 up 2:28, 0 users, load average: 2.29, 2.74, 2.83
Linux old-k8s-version-330869 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.3 LTS"
*
* ==> kube-apiserver [91420fd2d357] <==
* I1005 20:37:18.648312 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E1005 20:37:18.650296 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.85.2, ResourceVersion: 0, AdditionalErrorMsg:
I1005 20:37:18.651420 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1005 20:37:18.651503 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1005 20:37:18.747656 1 cache.go:39] Caches are synced for autoregister controller
I1005 20:37:18.749173 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1005 20:37:18.762082 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1005 20:37:18.762123 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1005 20:37:18.844252 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1005 20:37:19.647651 1 controller.go:107] OpenAPI AggregationController: Processing item
I1005 20:37:19.647684 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1005 20:37:19.647699 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1005 20:37:19.651492 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1005 20:37:19.654366 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1005 20:37:19.654390 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1005 20:37:21.429015 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1005 20:37:21.708924 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1005 20:37:22.050878 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I1005 20:37:22.051722 1 controller.go:606] quota admission added evaluator for: endpoints
I1005 20:37:22.934960 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1005 20:37:23.322631 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1005 20:37:23.671803 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1005 20:37:38.427328 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1005 20:37:38.453764 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1005 20:37:38.539579 1 controller.go:606] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [6c66019a6e01] <==
* I1005 20:37:38.466241 1 shared_informer.go:204] Caches are synced for HPA
I1005 20:37:38.482277 1 shared_informer.go:204] Caches are synced for stateful set
I1005 20:37:38.487462 1 shared_informer.go:204] Caches are synced for ReplicationController
I1005 20:37:38.487721 1 shared_informer.go:204] Caches are synced for GC
I1005 20:37:38.487734 1 shared_informer.go:204] Caches are synced for PVC protection
I1005 20:37:38.487705 1 shared_informer.go:204] Caches are synced for attach detach
I1005 20:37:38.512949 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1005 20:37:38.537588 1 shared_informer.go:204] Caches are synced for deployment
I1005 20:37:38.537985 1 shared_informer.go:204] Caches are synced for resource quota
I1005 20:37:38.542695 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"192", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I1005 20:37:38.550402 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-k2f47
I1005 20:37:38.553758 1 shared_informer.go:204] Caches are synced for expand
I1005 20:37:38.559594 1 shared_informer.go:204] Caches are synced for resource quota
I1005 20:37:38.560551 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wmjhd
I1005 20:37:38.588856 1 shared_informer.go:204] Caches are synced for disruption
I1005 20:37:38.588883 1 disruption.go:341] Sending events to api server.
I1005 20:37:38.605013 1 shared_informer.go:204] Caches are synced for persistent volume
I1005 20:37:38.646306 1 shared_informer.go:204] Caches are synced for garbage collector
I1005 20:37:38.681988 1 shared_informer.go:204] Caches are synced for service account
I1005 20:37:38.683218 1 shared_informer.go:204] Caches are synced for namespace
I1005 20:37:38.686440 1 shared_informer.go:204] Caches are synced for garbage collector
I1005 20:37:38.686460 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1005 20:37:38.941357 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
I1005 20:37:38.999527 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-wmjhd
I1005 20:40:53.441120 1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
*
* ==> kube-proxy [cef84f5b51c4] <==
* W1005 20:37:40.040149 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I1005 20:37:40.052818 1 node.go:135] Successfully retrieved node IP: 192.168.85.2
I1005 20:37:40.052863 1 server_others.go:149] Using iptables Proxier.
I1005 20:37:40.053425 1 server.go:529] Version: v1.16.0
I1005 20:37:40.053948 1 config.go:131] Starting endpoints config controller
I1005 20:37:40.053980 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1005 20:37:40.054066 1 config.go:313] Starting service config controller
I1005 20:37:40.054081 1 shared_informer.go:197] Waiting for caches to sync for service config
I1005 20:37:40.157367 1 shared_informer.go:204] Caches are synced for service config
I1005 20:37:40.224240 1 shared_informer.go:204] Caches are synced for endpoints config
*
* ==> kube-scheduler [a576da8318f8] <==
* W1005 20:37:18.743643 1 authentication.go:79] Authentication is disabled
I1005 20:37:18.743716 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1005 20:37:18.744158 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1005 20:37:18.842482 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1005 20:37:18.843126 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1005 20:37:18.843177 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1005 20:37:18.843316 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1005 20:37:18.843330 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1005 20:37:18.843386 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1005 20:37:18.843440 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1005 20:37:18.843521 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1005 20:37:18.843846 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1005 20:37:18.844748 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1005 20:37:18.927555 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1005 20:37:19.843756 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1005 20:37:19.844669 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1005 20:37:19.845775 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1005 20:37:19.846714 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1005 20:37:19.850359 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1005 20:37:19.919069 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1005 20:37:19.919980 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1005 20:37:19.927483 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1005 20:37:19.928788 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1005 20:37:19.929881 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1005 20:37:19.931691 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
*
* ==> kubelet <==
* Oct 05 20:44:26 old-k8s-version-330869 kubelet[2004]: I1005 20:44:26.447919 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 6m46.630387826s ago; threshold is 3m0s
Oct 05 20:44:31 old-k8s-version-330869 kubelet[2004]: I1005 20:44:31.448136 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 6m51.630595657s ago; threshold is 3m0s
Oct 05 20:44:36 old-k8s-version-330869 kubelet[2004]: I1005 20:44:36.448951 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 6m56.631397997s ago; threshold is 3m0s
Oct 05 20:44:41 old-k8s-version-330869 kubelet[2004]: I1005 20:44:41.449179 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m1.631643258s ago; threshold is 3m0s
Oct 05 20:44:46 old-k8s-version-330869 kubelet[2004]: I1005 20:44:46.449442 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m6.631908491s ago; threshold is 3m0s
Oct 05 20:44:51 old-k8s-version-330869 kubelet[2004]: I1005 20:44:51.449668 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m11.632131918s ago; threshold is 3m0s
Oct 05 20:44:56 old-k8s-version-330869 kubelet[2004]: I1005 20:44:56.449897 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m16.63236401s ago; threshold is 3m0s
Oct 05 20:45:01 old-k8s-version-330869 kubelet[2004]: I1005 20:45:01.450139 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m21.63260354s ago; threshold is 3m0s
Oct 05 20:45:06 old-k8s-version-330869 kubelet[2004]: I1005 20:45:06.450376 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m26.632844967s ago; threshold is 3m0s
Oct 05 20:45:11 old-k8s-version-330869 kubelet[2004]: I1005 20:45:11.450572 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m31.63303949s ago; threshold is 3m0s
Oct 05 20:45:16 old-k8s-version-330869 kubelet[2004]: I1005 20:45:16.450802 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m36.633267162s ago; threshold is 3m0s
Oct 05 20:45:21 old-k8s-version-330869 kubelet[2004]: I1005 20:45:21.451035 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m41.6335025s ago; threshold is 3m0s
Oct 05 20:45:26 old-k8s-version-330869 kubelet[2004]: I1005 20:45:26.451262 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m46.633727726s ago; threshold is 3m0s
Oct 05 20:45:31 old-k8s-version-330869 kubelet[2004]: I1005 20:45:31.451519 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m51.633961825s ago; threshold is 3m0s
Oct 05 20:45:36 old-k8s-version-330869 kubelet[2004]: I1005 20:45:36.451815 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m56.634284171s ago; threshold is 3m0s
Oct 05 20:45:41 old-k8s-version-330869 kubelet[2004]: I1005 20:45:41.452058 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m1.634517734s ago; threshold is 3m0s
Oct 05 20:45:46 old-k8s-version-330869 kubelet[2004]: I1005 20:45:46.452320 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m6.634787132s ago; threshold is 3m0s
Oct 05 20:45:51 old-k8s-version-330869 kubelet[2004]: I1005 20:45:51.452654 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m11.635118625s ago; threshold is 3m0s
Oct 05 20:45:56 old-k8s-version-330869 kubelet[2004]: I1005 20:45:56.452929 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m16.635390007s ago; threshold is 3m0s
Oct 05 20:46:01 old-k8s-version-330869 kubelet[2004]: I1005 20:46:01.453256 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m21.635701631s ago; threshold is 3m0s
Oct 05 20:46:06 old-k8s-version-330869 kubelet[2004]: I1005 20:46:06.453491 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m26.635958787s ago; threshold is 3m0s
Oct 05 20:46:11 old-k8s-version-330869 kubelet[2004]: I1005 20:46:11.453759 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m31.636224705s ago; threshold is 3m0s
Oct 05 20:46:16 old-k8s-version-330869 kubelet[2004]: I1005 20:46:16.454007 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m36.636470752s ago; threshold is 3m0s
Oct 05 20:46:21 old-k8s-version-330869 kubelet[2004]: I1005 20:46:21.454246 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m41.636710535s ago; threshold is 3m0s
Oct 05 20:46:26 old-k8s-version-330869 kubelet[2004]: I1005 20:46:26.454534 2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m46.636999608s ago; threshold is 3m0s
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-330869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-330869 describe pod coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-330869 describe pod coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner: exit status 1 (60.333071ms)
** stderr **
Error from server (NotFound): pods "coredns-5644d7b6d9-k2f47" not found
Error from server (NotFound): pods "kube-proxy-n9cwb" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-330869 describe pod coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (581.47s)
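The failure mode is consistent throughout the log above: the kubelet's PLEG (pod lifecycle event generator) stalled a few minutes after boot, the node went NotReady ("pleg was last seen active ... ago; threshold is 3m0s"), so coredns, kube-proxy, and storage-provisioner never became Ready and the 6m0s apps_running wait expired with GUEST_START. A hedged triage sketch, valid while the profile still exists and reusing this run's binary, profile, and context names:

    # Full log bundle for the issue template (as suggested in the error box above):
    out/minikube-linux-amd64 -p old-k8s-version-330869 logs --file=logs.txt
    # Kubelet journal inside the node container, filtered for PLEG stalls:
    out/minikube-linux-amd64 -p old-k8s-version-330869 ssh -- 'sudo journalctl -u kubelet --no-pager | grep -i pleg'
    # Node-level view of the NotReady condition:
    kubectl --context old-k8s-version-330869 describe node old-k8s-version-330869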