=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0524 19:15:33.044304 79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m58.632754164s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-262726 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-262726
E0524 19:17:05.590790 79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-262726: (1m31.651243094s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0524 19:18:48.773626 79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:19:02.538468 79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (1m23.310501606s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-262726 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                     TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a   13.6MB
registry.k8s.io/coredns/coredns           v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd   102MB
registry.k8s.io/etcd                      3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                 v1.24.4              6cab9d1bed1be   33.8MB
registry.k8s.io/kube-apiserver            v1.24.4              6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager        v1.24.4              1f99cb6da9a82   31MB
registry.k8s.io/kube-controller-manager   v1.24.4              1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58   39.5MB
registry.k8s.io/kube-proxy                v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                 v1.24.4              03fa22539fc1c   15.5MB
registry.k8s.io/kube-scheduler            v1.24.4              03fa22539fc1c   15.5MB
k8s.gcr.io/pause                          3.7                  221177c6082a8   311kB
registry.k8s.io/pause                     3.7                  221177c6082a8   311kB
-- /stdout --
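Note on the failure above: the busybox image pulled at 19:16 (before the stop) is absent after the restart. Later in this log the second start reports "couldn't find preloaded image ... assuming images are not preloaded" and re-extracts the preload tarball over /var, which repopulates the containerd image store and plausibly drops the manually pulled image. The failing assertion can be reproduced by hand with the same commands the test drives (a sketch; the profile name and grep target are taken from this run):

  out/minikube-linux-amd64 ssh -p test-preload-262726 -- sudo crictl image ls | \
    grep gcr.io/k8s-minikube/busybox || echo "busybox missing from image list"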
panic.go:522: *** TestPreload FAILED at 2023-05-24 19:19:53.203224306 +0000 UTC m=+2610.221011729
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-262726 -n test-preload-262726
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-262726 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-053110 ssh -n | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
| | multinode-053110-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-053110 ssh -n multinode-053110 sudo cat | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
| | /home/docker/cp-test_multinode-053110-m03_multinode-053110.txt | | | | | |
| cp | multinode-053110 cp multinode-053110-m03:/home/docker/cp-test.txt | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
| | multinode-053110-m02:/home/docker/cp-test_multinode-053110-m03_multinode-053110-m02.txt | | | | | |
| ssh | multinode-053110 ssh -n | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
| | multinode-053110-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-053110 ssh -n multinode-053110-m02 sudo cat | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
| | /home/docker/cp-test_multinode-053110-m03_multinode-053110-m02.txt | | | | | |
| node | multinode-053110 node stop m03 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
| node | multinode-053110 node start | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:58 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:58 UTC | |
| stop | -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 18:58 UTC | 24 May 23 19:01 UTC |
| start | -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:01 UTC | 24 May 23 19:07 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:07 UTC | |
| node | multinode-053110 node delete | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:07 UTC | 24 May 23 19:07 UTC |
| | m03 | | | | | |
| stop | multinode-053110 stop | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:07 UTC | 24 May 23 19:10 UTC |
| start | -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:10 UTC | 24 May 23 19:14 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | |
| start | -p multinode-053110-m02 | multinode-053110-m02 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-053110-m03 | multinode-053110-m03 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:14 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | |
| delete | -p multinode-053110-m03 | multinode-053110-m03 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:14 UTC |
| delete | -p multinode-053110 | multinode-053110 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:14 UTC |
| start | -p test-preload-262726 | test-preload-262726 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:16 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-262726 | test-preload-262726 | jenkins | v1.30.1 | 24 May 23 19:16 UTC | 24 May 23 19:16 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-262726 | test-preload-262726 | jenkins | v1.30.1 | 24 May 23 19:16 UTC | 24 May 23 19:18 UTC |
| start | -p test-preload-262726 | test-preload-262726 | jenkins | v1.30.1 | 24 May 23 19:18 UTC | 24 May 23 19:19 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-262726 -- sudo | test-preload-262726 | jenkins | v1.30.1 | 24 May 23 19:19 UTC | 24 May 23 19:19 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/05/24 19:18:29
Running on machine: ubuntu-20-agent-12
Binary: Built with gc go1.20.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0524 19:18:29.711769 98716 out.go:296] Setting OutFile to fd 1 ...
I0524 19:18:29.711901 98716 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 19:18:29.711912 98716 out.go:309] Setting ErrFile to fd 2...
I0524 19:18:29.711919 98716 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 19:18:29.712054 98716 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
I0524 19:18:29.712522 98716 out.go:303] Setting JSON to false
I0524 19:18:29.713329 98716 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10845,"bootTime":1684945065,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0524 19:18:29.713410 98716 start.go:135] virtualization: kvm guest
I0524 19:18:29.716721 98716 out.go:177] * [test-preload-262726] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
I0524 19:18:29.718796 98716 out.go:177] - MINIKUBE_LOCATION=16573
I0524 19:18:29.720581 98716 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0524 19:18:29.718811 98716 notify.go:220] Checking for updates...
I0524 19:18:29.722263 98716 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
I0524 19:18:29.724088 98716 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
I0524 19:18:29.725796 98716 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0524 19:18:29.727298 98716 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0524 19:18:29.729179 98716 config.go:182] Loaded profile config "test-preload-262726": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0524 19:18:29.729535 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:18:29.729574 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:18:29.744072 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
I0524 19:18:29.744441 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:18:29.745019 98716 main.go:141] libmachine: Using API Version 1
I0524 19:18:29.745044 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:18:29.745374 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:18:29.745569 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:29.747440 98716 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
I0524 19:18:29.749033 98716 driver.go:375] Setting default libvirt URI to qemu:///system
I0524 19:18:29.749308 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:18:29.749362 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:18:29.763201 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
I0524 19:18:29.763492 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:18:29.763902 98716 main.go:141] libmachine: Using API Version 1
I0524 19:18:29.763923 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:18:29.764201 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:18:29.764358 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:29.797067 98716 out.go:177] * Using the kvm2 driver based on existing profile
I0524 19:18:29.798689 98716 start.go:295] selected driver: kvm2
I0524 19:18:29.798704 98716 start.go:870] validating driver "kvm2" against &{Name:test-preload-262726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0524 19:18:29.798823 98716 start.go:881] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0524 19:18:29.799422 98716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0524 19:18:29.799497 98716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16573-71939/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0524 19:18:29.812802 98716 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.30.1
I0524 19:18:29.813107 98716 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0524 19:18:29.813140 98716 cni.go:84] Creating CNI manager for ""
I0524 19:18:29.813151 98716 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0524 19:18:29.813164 98716 start_flags.go:319] config:
{Name:test-preload-262726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0524 19:18:29.813271 98716 iso.go:125] acquiring lock: {Name:mk070acfedcbbaf2c11bfabff12ffb52c449689f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0524 19:18:29.815369 98716 out.go:177] * Starting control plane node test-preload-262726 in cluster test-preload-262726
I0524 19:18:29.816992 98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0524 19:18:29.842970 98716 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0524 19:18:29.843000 98716 cache.go:57] Caching tarball of preloaded images
I0524 19:18:29.843117 98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0524 19:18:29.845010 98716 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0524 19:18:29.846518 98716 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0524 19:18:29.883002 98716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0524 19:18:34.602043 98716 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0524 19:18:34.602125 98716 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0524 19:18:35.465495 98716 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
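The download URL above pins an md5 checksum (the ?checksum= query parameter), so the cached tarball can be re-verified by hand (a sketch; assumes md5sum is on PATH):

  md5sum /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
  # expect: 41d292e9d8b8bb8fdf3bc94dc3c43bf0 (from the ?checksum= parameter in the download URL)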
I0524 19:18:35.465636 98716 profile.go:148] Saving config to /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/config.json ...
I0524 19:18:35.465874 98716 cache.go:195] Successfully downloaded all kic artifacts
I0524 19:18:35.465904 98716 start.go:364] acquiring machines lock for test-preload-262726: {Name:mk7d4981ff7dce8da894e9fe23513f11c9471c1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0524 19:18:35.465969 98716 start.go:368] acquired machines lock for "test-preload-262726" in 49.087µs
I0524 19:18:35.465985 98716 start.go:96] Skipping create...Using existing machine configuration
I0524 19:18:35.465991 98716 fix.go:55] fixHost starting:
I0524 19:18:35.466259 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:18:35.466296 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:18:35.480633 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
I0524 19:18:35.481045 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:18:35.481565 98716 main.go:141] libmachine: Using API Version 1
I0524 19:18:35.481589 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:18:35.481955 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:18:35.482153 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:35.482319 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
I0524 19:18:35.484046 98716 fix.go:103] recreateIfNeeded on test-preload-262726: state=Stopped err=<nil>
I0524 19:18:35.484073 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
W0524 19:18:35.484245 98716 fix.go:129] unexpected machine state, will restart: <nil>
I0524 19:18:35.487766 98716 out.go:177] * Restarting existing kvm2 VM for "test-preload-262726" ...
I0524 19:18:35.489471 98716 main.go:141] libmachine: (test-preload-262726) Calling .Start
I0524 19:18:35.489627 98716 main.go:141] libmachine: (test-preload-262726) Ensuring networks are active...
I0524 19:18:35.490325 98716 main.go:141] libmachine: (test-preload-262726) Ensuring network default is active
I0524 19:18:35.490639 98716 main.go:141] libmachine: (test-preload-262726) Ensuring network mk-test-preload-262726 is active
I0524 19:18:35.490962 98716 main.go:141] libmachine: (test-preload-262726) Getting domain xml...
I0524 19:18:35.491640 98716 main.go:141] libmachine: (test-preload-262726) Creating domain...
I0524 19:18:36.666074 98716 main.go:141] libmachine: (test-preload-262726) Waiting to get IP...
I0524 19:18:36.666949 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:36.667317 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:36.667422 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:36.667310 98763 retry.go:31] will retry after 297.185972ms: waiting for machine to come up
I0524 19:18:36.965858 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:36.966308 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:36.966340 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:36.966250 98763 retry.go:31] will retry after 261.717109ms: waiting for machine to come up
I0524 19:18:37.229823 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:37.230184 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:37.230218 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:37.230123 98763 retry.go:31] will retry after 375.565654ms: waiting for machine to come up
I0524 19:18:37.607318 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:37.607766 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:37.607797 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:37.607722 98763 retry.go:31] will retry after 384.604355ms: waiting for machine to come up
I0524 19:18:37.994372 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:37.994877 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:37.994910 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:37.994816 98763 retry.go:31] will retry after 487.001687ms: waiting for machine to come up
I0524 19:18:38.483495 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:38.483968 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:38.483999 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:38.483897 98763 retry.go:31] will retry after 824.382758ms: waiting for machine to come up
I0524 19:18:39.309988 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:39.310430 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:39.310462 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:39.310374 98763 retry.go:31] will retry after 1.07159268s: waiting for machine to come up
I0524 19:18:40.383490 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:40.383886 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:40.383922 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:40.383821 98763 retry.go:31] will retry after 985.49653ms: waiting for machine to come up
I0524 19:18:41.371426 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:41.371846 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:41.371870 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:41.371796 98763 retry.go:31] will retry after 1.173158795s: waiting for machine to come up
I0524 19:18:42.546507 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:42.546918 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:42.546954 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:42.546861 98763 retry.go:31] will retry after 1.551327554s: waiting for machine to come up
I0524 19:18:44.100729 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:44.101259 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:44.101298 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:44.101167 98763 retry.go:31] will retry after 1.783324652s: waiting for machine to come up
I0524 19:18:45.886671 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:45.887101 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:45.887124 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:45.887054 98763 retry.go:31] will retry after 3.324585975s: waiting for machine to come up
I0524 19:18:49.215422 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:49.215775 98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
I0524 19:18:49.215797 98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:49.215727 98763 retry.go:31] will retry after 3.488218878s: waiting for machine to come up
I0524 19:18:52.706357 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.706848 98716 main.go:141] libmachine: (test-preload-262726) Found IP for machine: 192.168.39.12
I0524 19:18:52.706867 98716 main.go:141] libmachine: (test-preload-262726) Reserving static IP address...
I0524 19:18:52.706879 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has current primary IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.707330 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "test-preload-262726", mac: "52:54:00:2c:ce:d5", ip: "192.168.39.12"} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:52.707366 98716 main.go:141] libmachine: (test-preload-262726) DBG | skip adding static IP to network mk-test-preload-262726 - found existing host DHCP lease matching {name: "test-preload-262726", mac: "52:54:00:2c:ce:d5", ip: "192.168.39.12"}
I0524 19:18:52.707383 98716 main.go:141] libmachine: (test-preload-262726) Reserved static IP address: 192.168.39.12
I0524 19:18:52.707413 98716 main.go:141] libmachine: (test-preload-262726) Waiting for SSH to be available...
I0524 19:18:52.707445 98716 main.go:141] libmachine: (test-preload-262726) DBG | Getting to WaitForSSH function...
I0524 19:18:52.709527 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.709795 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:52.709827 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.709896 98716 main.go:141] libmachine: (test-preload-262726) DBG | Using SSH client type: external
I0524 19:18:52.709939 98716 main.go:141] libmachine: (test-preload-262726) DBG | Using SSH private key: /home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa (-rw-------)
I0524 19:18:52.709967 98716 main.go:141] libmachine: (test-preload-262726) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa -p 22] /usr/bin/ssh <nil>}
I0524 19:18:52.709980 98716 main.go:141] libmachine: (test-preload-262726) DBG | About to run SSH command:
I0524 19:18:52.709992 98716 main.go:141] libmachine: (test-preload-262726) DBG | exit 0
I0524 19:18:52.796422 98716 main.go:141] libmachine: (test-preload-262726) DBG | SSH cmd err, output: <nil>:
I0524 19:18:52.796747 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetConfigRaw
I0524 19:18:52.797377 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
I0524 19:18:52.799667 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.799999 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:52.800032 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.800218 98716 profile.go:148] Saving config to /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/config.json ...
I0524 19:18:52.800406 98716 machine.go:88] provisioning docker machine ...
I0524 19:18:52.800441 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:52.800657 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetMachineName
I0524 19:18:52.800840 98716 buildroot.go:166] provisioning hostname "test-preload-262726"
I0524 19:18:52.800860 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetMachineName
I0524 19:18:52.800989 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:52.802923 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.803209 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:52.803236 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.803304 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:52.803477 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:52.803634 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:52.803789 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:52.803946 98716 main.go:141] libmachine: Using SSH client type: native
I0524 19:18:52.804367 98716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.12 22 <nil> <nil>}
I0524 19:18:52.804379 98716 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-262726 && echo "test-preload-262726" | sudo tee /etc/hostname
I0524 19:18:52.937910 98716 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-262726
I0524 19:18:52.937934 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:52.940161 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.940453 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:52.940484 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:52.940652 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:52.940815 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:52.940976 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:52.941115 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:52.941288 98716 main.go:141] libmachine: Using SSH client type: native
I0524 19:18:52.941720 98716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.12 22 <nil> <nil>}
I0524 19:18:52.941740 98716 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-262726' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-262726/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-262726' | sudo tee -a /etc/hosts;
fi
fi
I0524 19:18:53.064537 98716 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0524 19:18:53.064559 98716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16573-71939/.minikube CaCertPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16573-71939/.minikube}
I0524 19:18:53.064578 98716 buildroot.go:174] setting up certificates
I0524 19:18:53.064588 98716 provision.go:83] configureAuth start
I0524 19:18:53.064597 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetMachineName
I0524 19:18:53.064824 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
I0524 19:18:53.066716 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.067009 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.067040 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.067190 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:53.068978 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.069260 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.069294 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.069385 98716 provision.go:138] copyHostCerts
I0524 19:18:53.069444 98716 exec_runner.go:144] found /home/jenkins/minikube-integration/16573-71939/.minikube/key.pem, removing ...
I0524 19:18:53.069453 98716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16573-71939/.minikube/key.pem
I0524 19:18:53.069519 98716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16573-71939/.minikube/key.pem (1679 bytes)
I0524 19:18:53.069618 98716 exec_runner.go:144] found /home/jenkins/minikube-integration/16573-71939/.minikube/ca.pem, removing ...
I0524 19:18:53.069633 98716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.pem
I0524 19:18:53.069666 98716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16573-71939/.minikube/ca.pem (1078 bytes)
I0524 19:18:53.069742 98716 exec_runner.go:144] found /home/jenkins/minikube-integration/16573-71939/.minikube/cert.pem, removing ...
I0524 19:18:53.069751 98716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16573-71939/.minikube/cert.pem
I0524 19:18:53.069781 98716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16573-71939/.minikube/cert.pem (1123 bytes)
I0524 19:18:53.069893 98716 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca-key.pem org=jenkins.test-preload-262726 san=[192.168.39.12 192.168.39.12 localhost 127.0.0.1 minikube test-preload-262726]
I0524 19:18:53.148825 98716 provision.go:172] copyRemoteCerts
I0524 19:18:53.148886 98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0524 19:18:53.148914 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:53.150893 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.151180 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.151213 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.151320 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:53.151478 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:53.151600 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:53.151709 98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
I0524 19:18:53.238277 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0524 19:18:53.259310 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0524 19:18:53.279833 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0524 19:18:53.300144 98716 provision.go:86] duration metric: configureAuth took 235.54757ms
I0524 19:18:53.300157 98716 buildroot.go:189] setting minikube options for container-runtime
I0524 19:18:53.300316 98716 config.go:182] Loaded profile config "test-preload-262726": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0524 19:18:53.300329 98716 machine.go:91] provisioned docker machine in 499.910757ms
I0524 19:18:53.300336 98716 start.go:300] post-start starting for "test-preload-262726" (driver="kvm2")
I0524 19:18:53.300343 98716 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0524 19:18:53.300378 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:53.300598 98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0524 19:18:53.300624 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:53.302734 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.303044 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.303070 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.303218 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:53.303384 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:53.303576 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:53.303730 98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
I0524 19:18:53.389780 98716 ssh_runner.go:195] Run: cat /etc/os-release
I0524 19:18:53.393476 98716 info.go:137] Remote host: Buildroot 2021.02.12
I0524 19:18:53.393493 98716 filesync.go:126] Scanning /home/jenkins/minikube-integration/16573-71939/.minikube/addons for local assets ...
I0524 19:18:53.393541 98716 filesync.go:126] Scanning /home/jenkins/minikube-integration/16573-71939/.minikube/files for local assets ...
I0524 19:18:53.393640 98716 filesync.go:149] local asset: /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
I0524 19:18:53.393740 98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0524 19:18:53.402117 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
I0524 19:18:53.422540 98716 start.go:303] post-start completed in 122.193902ms
I0524 19:18:53.422558 98716 fix.go:57] fixHost completed within 17.956566594s
I0524 19:18:53.422574 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:53.424632 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.424917 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.424949 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.425104 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:53.425282 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:53.425445 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:53.425630 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:53.425792 98716 main.go:141] libmachine: Using SSH client type: native
I0524 19:18:53.426173 98716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.12 22 <nil> <nil>}
I0524 19:18:53.426184 98716 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0524 19:18:53.541180 98716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684955933.490809786
I0524 19:18:53.541202 98716 fix.go:207] guest clock: 1684955933.490809786
I0524 19:18:53.541211 98716 fix.go:220] Guest: 2023-05-24 19:18:53.490809786 +0000 UTC Remote: 2023-05-24 19:18:53.422562549 +0000 UTC m=+23.742028532 (delta=68.247237ms)
I0524 19:18:53.541240 98716 fix.go:191] guest clock delta is within tolerance: 68.247237ms
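The clock check above reads the guest clock over SSH with date +%s.%N and compares it to the host clock taken at roughly the same instant; the 68ms delta is inside tolerance. The skew can be eyeballed by hand (a sketch; the two back-to-back reads add a few ms of noise of their own):

  out/minikube-linux-amd64 ssh -p test-preload-262726 -- date +%s.%N; date +%s.%N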
I0524 19:18:53.541247 98716 start.go:83] releasing machines lock for "test-preload-262726", held for 18.07526583s
I0524 19:18:53.541272 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:53.541465 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
I0524 19:18:53.543460 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.543786 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.543817 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.543926 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:53.544310 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:53.544476 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:18:53.544587 98716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0524 19:18:53.544642 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:53.544691 98716 ssh_runner.go:195] Run: cat /version.json
I0524 19:18:53.544711 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:18:53.547094 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.547376 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.547407 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.547432 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.547554 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:53.547729 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:53.547753 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:18:53.547783 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:18:53.547896 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:53.547974 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:18:53.548046 98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
I0524 19:18:53.548123 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:18:53.548249 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:18:53.548382 98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
I0524 19:18:53.684389 98716 ssh_runner.go:195] Run: systemctl --version
I0524 19:18:53.689668 98716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0524 19:18:53.694680 98716 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0524 19:18:53.694730 98716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0524 19:18:53.709284 98716 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0524 19:18:53.709300 98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0524 19:18:53.709420 98716 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 19:18:57.742348 98716 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.03290368s)
I0524 19:18:57.742472 98716 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0524 19:18:57.742612 98716 ssh_runner.go:195] Run: which lz4
I0524 19:18:57.746729 98716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0524 19:18:57.750604 98716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0524 19:18:57.750627 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0524 19:18:59.372315 98716 containerd.go:547] Took 1.625617 seconds to copy over tarball
I0524 19:18:59.372389 98716 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0524 19:19:02.385241 98716 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012822185s)
I0524 19:19:02.385271 98716 containerd.go:554] Took 3.012929 seconds to extract the tarball
I0524 19:19:02.385284 98716 ssh_runner.go:146] rm: /preloaded.tar.lz4
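(The block above is the preload restore path: stat the tarball on the guest, copy it over when the stat fails, extract it under /var with lz4, then delete it. A condensed Go sketch of that sequence follows, with the "remote" steps replaced by plain local commands for illustration only.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restorePreload sketches the copy-then-extract flow from the log:
// if /preloaded.tar.lz4 is absent, copy it in, untar under /var, remove it.
func restorePreload(tarball string) error {
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		// The stat in the log exited with status 1: file absent, so copy it.
		if err := exec.Command("cp", tarball, "/preloaded.tar.lz4").Run(); err != nil {
			return fmt.Errorf("copy tarball: %w", err)
		}
	}
	if err := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("extract: %w", err)
	}
	return os.Remove("/preloaded.tar.lz4")
}

func main() {
	// Tarball name taken from the log; the local path is assumed.
	if err := restorePreload("preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}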
I0524 19:19:02.424067 98716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 19:19:02.529108 98716 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 19:19:02.547814 98716 start.go:481] detecting cgroup driver to use...
I0524 19:19:02.547878 98716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0524 19:19:05.252817 98716 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.704911709s)
I0524 19:19:05.252887 98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 19:19:05.265480 98716 docker.go:193] disabling cri-docker service (if available) ...
I0524 19:19:05.265529 98716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0524 19:19:05.278930 98716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0524 19:19:05.293499 98716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0524 19:19:05.395811 98716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0524 19:19:05.495400 98716 docker.go:209] disabling docker service ...
I0524 19:19:05.495454 98716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0524 19:19:05.507728 98716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0524 19:19:05.518146 98716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0524 19:19:05.612172 98716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0524 19:19:05.716557 98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0524 19:19:05.727684   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 19:19:05.745291 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
I0524 19:19:05.754299 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0524 19:19:05.762964 98716 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0524 19:19:05.763010 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0524 19:19:05.771759 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 19:19:05.780491 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0524 19:19:05.789120 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 19:19:05.797799 98716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0524 19:19:05.806673 98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
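(The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.7, disable restrict_oom_score_adj, force SystemdCgroup = false to match the cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, drop stale systemd_cgroup lines, and point conf_dir at /etc/cni/net.d. A local Go sketch of the same substitutions follows; paths are assumed and this is not how minikube itself applies them.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteContainerdConfig applies the same edits the sed commands perform.
// (The log also runs `sed -i '/systemd_cgroup/d'`; deleting those lines is
// omitted here for brevity.)
func rewriteContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	subs := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.7"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`},
		{`"io.containerd.runc.v1"`, `"io.containerd.runc.v2"`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, s := range subs {
		data = regexp.MustCompile(s.re).ReplaceAll(data, []byte(s.repl))
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := rewriteContainerdConfig("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}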
I0524 19:19:05.815488 98716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0524 19:19:05.823645 98716 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0524 19:19:05.823687 98716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0524 19:19:05.835702 98716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0524 19:19:05.843504 98716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 19:19:05.935697 98716 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 19:19:05.966686 98716 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
I0524 19:19:05.966740 98716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0524 19:19:05.973161 98716 retry.go:31] will retry after 861.302406ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0524 19:19:06.835221 98716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
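(Lines above show the 60-second socket wait: after restarting containerd, minikube stats /run/containerd/containerd.sock and retries with a short jittered delay, 861ms in this run, until the socket appears. A minimal Go sketch of that wait loop, with a fixed rather than jittered delay, follows for illustration.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout expires,
// mirroring the retry.go loop in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 500 * time.Millisecond // the real loop uses a jittered delay
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(wait)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}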
I0524 19:19:06.840643 98716 start.go:549] Will wait 60s for crictl version
I0524 19:19:06.840694 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:06.844272 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0524 19:19:06.872918 98716 start.go:565] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.1
RuntimeApiVersion: v1alpha2
I0524 19:19:06.872977 98716 ssh_runner.go:195] Run: containerd --version
I0524 19:19:06.900106 98716 ssh_runner.go:195] Run: containerd --version
I0524 19:19:06.929162 98716 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.7.1 ...
I0524 19:19:06.930962 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
I0524 19:19:06.933493 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:19:06.933851 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:19:06.933877 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:19:06.934069 98716 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0524 19:19:06.937728 98716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0524 19:19:06.949942 98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0524 19:19:06.950018 98716 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 19:19:06.983631 98716 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0524 19:19:06.983682 98716 ssh_runner.go:195] Run: which lz4
I0524 19:19:06.987261   98716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0524 19:19:06.990993   98716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0524 19:19:06.991015 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0524 19:19:08.698638 98716 containerd.go:547] Took 1.711399 seconds to copy over tarball
I0524 19:19:08.698705 98716 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0524 19:19:11.658664 98716 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959924019s)
I0524 19:19:11.658704 98716 containerd.go:554] Took 2.960039 seconds to extract the tarball
I0524 19:19:11.658716 98716 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0524 19:19:11.698408 98716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 19:19:11.799282 98716 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 19:19:11.825315 98716 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 19:19:12.881184 98716 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.05582159s)
I0524 19:19:12.881318 98716 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0524 19:19:12.881334 98716 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0524 19:19:12.881436 98716 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0524 19:19:12.881473 98716 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
I0524 19:19:12.881506 98716 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
I0524 19:19:12.881519 98716 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
I0524 19:19:12.881561 98716 image.go:134] retrieving image: registry.k8s.io/pause:3.7
I0524 19:19:12.881647 98716 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
I0524 19:19:12.881747 98716 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
I0524 19:19:12.881761 98716 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
I0524 19:19:12.882749 98716 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
I0524 19:19:12.882772 98716 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0524 19:19:12.882748 98716 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
I0524 19:19:12.882751 98716 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
I0524 19:19:12.882827 98716 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
I0524 19:19:12.882813 98716 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
I0524 19:19:12.882861 98716 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
I0524 19:19:12.882886 98716 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
I0524 19:19:13.025844 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.7"
I0524 19:19:13.045120 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.6"
I0524 19:19:13.067797 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0524 19:19:13.072020 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.3-0"
I0524 19:19:13.073189 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.24.4"
I0524 19:19:13.080694 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.24.4"
I0524 19:19:13.085467 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.24.4"
I0524 19:19:13.089256 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.24.4"
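(The eight commands above check each required image by piping `ctr -n=k8s.io images check` through grep, one invocation per image. A local Go sketch of the same presence check follows, run once and scanned for every reference; it assumes ctr is on PATH and is not minikube's own code.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// missingImages runs the ctr check once and reports which required
// image references do not appear in its output.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "check").Output()
	if err != nil {
		return nil, err
	}
	var missing []string
	for _, img := range required {
		if !strings.Contains(string(out), img) {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	missing, err := missingImages([]string{
		"registry.k8s.io/pause:3.7",
		"registry.k8s.io/etcd:3.5.3-0",
		"registry.k8s.io/coredns/coredns:v1.8.6",
	})
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("missing:", missing)
}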
I0524 19:19:13.603571 98716 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I0524 19:19:13.603624 98716 cri.go:217] Removing image: registry.k8s.io/pause:3.7
I0524 19:19:13.603705 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:13.623893 98716 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0524 19:19:13.623938 98716 cri.go:217] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
I0524 19:19:13.623982 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.057874 98716 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0524 19:19:14.057921 98716 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0524 19:19:14.057970 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.180242 98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.3-0": (1.108197523s)
I0524 19:19:14.180280 98716 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I0524 19:19:14.180302 98716 cri.go:217] Removing image: registry.k8s.io/etcd:3.5.3-0
I0524 19:19:14.180342 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.180366 98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.24.4": (1.107150363s)
I0524 19:19:14.180423 98716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
I0524 19:19:14.180459 98716 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
I0524 19:19:14.180504 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.252373 98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.24.4": (1.171655245s)
I0524 19:19:14.252412 98716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
I0524 19:19:14.252432 98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.24.4": (1.163156617s)
I0524 19:19:14.252443 98716 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
I0524 19:19:14.252461 98716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
I0524 19:19:14.252481 98716 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.24.4
I0524 19:19:14.252484 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.252501 98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.24.4": (1.166999274s)
I0524 19:19:14.252543 98716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
I0524 19:19:14.252577 98716 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
I0524 19:19:14.252581 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
I0524 19:19:14.252506 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.252609 98716 ssh_runner.go:195] Run: which crictl
I0524 19:19:14.252790 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
I0524 19:19:14.252822 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0524 19:19:14.252894 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
I0524 19:19:14.252993 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
I0524 19:19:14.266711 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
I0524 19:19:14.266730 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
I0524 19:19:14.354112 98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
I0524 19:19:14.354180 98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
I0524 19:19:14.354286   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I0524 19:19:14.410683   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0524 19:19:14.410752   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
I0524 19:19:14.410807   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0524 19:19:14.410835   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
I0524 19:19:14.410897   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I0524 19:19:14.410840   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0524 19:19:14.410935   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
I0524 19:19:14.411012   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
I0524 19:19:14.411017   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
I0524 19:19:14.411060   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
I0524 19:19:14.415045   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
I0524 19:19:14.415107   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
I0524 19:19:14.429235   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
I0524 19:19:14.429298   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
I0524 19:19:14.429326 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I0524 19:19:14.429342 98716 containerd.go:269] Loading image: /var/lib/minikube/images/pause_3.7
I0524 19:19:14.429383 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I0524 19:19:14.429392 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I0524 19:19:14.429461 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
I0524 19:19:14.429513 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I0524 19:19:14.429554 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I0524 19:19:14.429580 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
I0524 19:19:14.431574 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
I0524 19:19:14.434933 98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
I0524 19:19:14.544028 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
I0524 19:19:14.544073 98716 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0524 19:19:14.544130 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0524 19:19:15.117549 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0524 19:19:15.117588 98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
I0524 19:19:15.117650 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.4
I0524 19:19:15.729357 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
I0524 19:19:15.729396 98716 containerd.go:269] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0524 19:19:15.729484 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I0524 19:19:16.084092 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
I0524 19:19:16.084137 98716 containerd.go:269] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I0524 19:19:16.084209 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I0524 19:19:17.790841 98716 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (1.706603502s)
I0524 19:19:17.790868 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
I0524 19:19:17.790919 98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
I0524 19:19:17.790973 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.4
I0524 19:19:18.415328 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
I0524 19:19:18.415365 98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
I0524 19:19:18.415413 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.4
I0524 19:19:18.789128 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
I0524 19:19:18.789181 98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
I0524 19:19:18.789253 98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.4
I0524 19:19:19.487694 98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
I0524 19:19:19.487743 98716 cache_images.go:123] Successfully loaded all cached images
I0524 19:19:19.487750 98716 cache_images.go:92] LoadImages completed in 6.606406867s
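(The LoadImages sequence above follows a simple per-image pattern: stat the tarball under /var/lib/minikube/images, skip the copy when it already exists, then import each one serially with `ctr -n=k8s.io images import`. A condensed Go sketch of that pattern follows; cacheDir and the image names are illustrative assumptions.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImages copies each cached image tarball to the images dir
// unless it is already present, then imports it into containerd.
func loadCachedImages(cacheDir string, names []string) error {
	for _, name := range names {
		dst := filepath.Join("/var/lib/minikube/images", name)
		if _, err := os.Stat(dst); err != nil {
			if err := exec.Command("cp", filepath.Join(cacheDir, name), dst).Run(); err != nil {
				return fmt.Errorf("copy %s: %w", name, err)
			}
		}
		if err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dst).Run(); err != nil {
			return fmt.Errorf("import %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	// Hypothetical cache dir; the real one lives under the minikube home.
	err := loadCachedImages("/tmp/minikube-image-cache", []string{"pause_3.7", "etcd_3.5.3-0"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}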
I0524 19:19:19.487809 98716 ssh_runner.go:195] Run: sudo crictl info
I0524 19:19:19.516845 98716 cni.go:84] Creating CNI manager for ""
I0524 19:19:19.516864 98716 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0524 19:19:19.516887 98716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0524 19:19:19.516914   98716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-262726 NodeName:test-preload-262726 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0524 19:19:19.517090 98716 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.12
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "test-preload-262726"
kubeletExtraArgs:
node-ip: 192.168.39.12
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0524 19:19:19.517187 98716 kubeadm.go:971] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-262726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0524 19:19:19.517255 98716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0524 19:19:19.526645 98716 binaries.go:44] Found k8s binaries, skipping transfer
I0524 19:19:19.526713 98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0524 19:19:19.535819 98716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (392 bytes)
I0524 19:19:19.551040 98716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0524 19:19:19.565363 98716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
I0524 19:19:19.580240 98716 ssh_runner.go:195] Run: grep 192.168.39.12 control-plane.minikube.internal$ /etc/hosts
I0524 19:19:19.583574 98716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.12 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
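(The grep-and-bash one-liners above pin host entries: first grep confirms whether "IP hostname" is already present; when it is not, the bash command strips any stale line ending in the hostname and appends the fresh mapping atomically via a temp file. A local Go sketch of the same rewrite follows; it writes to a .new copy instead of /etc/hosts and is illustration only.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost removes any line ending in "\t<hostname>" (as `grep -v` does)
// and appends a fresh "IP\thostname" entry, writing the result to a copy.
func pinHost(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.12", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}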
I0524 19:19:19.594021 98716 certs.go:56] Setting up /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726 for IP: 192.168.39.12
I0524 19:19:19.594047 98716 certs.go:190] acquiring lock for shared ca certs: {Name:mk2a3f0918ca1ce5e8a6fdf9e7f174b68f929bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 19:19:19.594179 98716 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.key
I0524 19:19:19.594214 98716 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16573-71939/.minikube/proxy-client-ca.key
I0524 19:19:19.594325 98716 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key
I0524 19:19:19.594383 98716 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/apiserver.key.6b589965
I0524 19:19:19.594429 98716 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/proxy-client.key
I0524 19:19:19.594540 98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/79153.pem (1338 bytes)
W0524 19:19:19.594566 98716 certs.go:433] ignoring /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/79153_empty.pem, impossibly tiny 0 bytes
I0524 19:19:19.594577 98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca-key.pem (1675 bytes)
I0524 19:19:19.594600 98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem (1078 bytes)
I0524 19:19:19.594624 98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/cert.pem (1123 bytes)
I0524 19:19:19.594646 98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/key.pem (1679 bytes)
I0524 19:19:19.594682 98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem (1708 bytes)
I0524 19:19:19.595319 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0524 19:19:19.617452 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0524 19:19:19.640282 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0524 19:19:19.662076 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0524 19:19:19.682763 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0524 19:19:19.703266 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0524 19:19:19.724673 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0524 19:19:19.746052 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0524 19:19:19.767686 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem --> /usr/share/ca-certificates/791532.pem (1708 bytes)
I0524 19:19:19.789391 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0524 19:19:19.810737 98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/certs/79153.pem --> /usr/share/ca-certificates/79153.pem (1338 bytes)
I0524 19:19:19.832180 98716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0524 19:19:19.847961 98716 ssh_runner.go:195] Run: openssl version
I0524 19:19:19.853621 98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0524 19:19:19.864345 98716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0524 19:19:19.868793 98716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:37 /usr/share/ca-certificates/minikubeCA.pem
I0524 19:19:19.868834 98716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0524 19:19:19.874165 98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0524 19:19:19.884305 98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79153.pem && ln -fs /usr/share/ca-certificates/79153.pem /etc/ssl/certs/79153.pem"
I0524 19:19:19.894216 98716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79153.pem
I0524 19:19:19.898501 98716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:42 /usr/share/ca-certificates/79153.pem
I0524 19:19:19.898546 98716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79153.pem
I0524 19:19:19.903884 98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79153.pem /etc/ssl/certs/51391683.0"
I0524 19:19:19.913585 98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/791532.pem && ln -fs /usr/share/ca-certificates/791532.pem /etc/ssl/certs/791532.pem"
I0524 19:19:19.923527 98716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791532.pem
I0524 19:19:19.927840 98716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:42 /usr/share/ca-certificates/791532.pem
I0524 19:19:19.927879 98716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791532.pem
I0524 19:19:19.933202 98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/791532.pem /etc/ssl/certs/3ec20f2e.0"
I0524 19:19:19.943003 98716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0524 19:19:19.947239 98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0524 19:19:19.953025 98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0524 19:19:19.958105 98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0524 19:19:19.963277 98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0524 19:19:19.968700 98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0524 19:19:19.974073 98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
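(Each openssl run above asks the same question of one control-plane certificate: `-checkend 86400` exits non-zero if the cert will expire within the next 24 hours, so a clean exit across all six means no imminent renewal is needed. A small Go sketch of that validation loop follows; the cert list is a subset and purely illustrative.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// openssl exits non-zero if the cert expires within 86400 seconds.
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s: expiring soon or unreadable (%v)\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}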
I0524 19:19:19.979227   98716 kubeadm.go:404] StartCluster: {Name:test-preload-262726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0524 19:19:19.979335 98716 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0524 19:19:19.979403 98716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0524 19:19:20.015236 98716 cri.go:88] found id: ""
I0524 19:19:20.015296 98716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0524 19:19:20.024701 98716 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0524 19:19:20.024723 98716 kubeadm.go:636] restartCluster start
I0524 19:19:20.024767 98716 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0524 19:19:20.033565 98716 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0524 19:19:20.034007 98716 kubeconfig.go:135] verify returned: extract IP: "test-preload-262726" does not appear in /home/jenkins/minikube-integration/16573-71939/kubeconfig
I0524 19:19:20.034139 98716 kubeconfig.go:146] "test-preload-262726" context is missing from /home/jenkins/minikube-integration/16573-71939/kubeconfig - will repair!
I0524 19:19:20.034447 98716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16573-71939/kubeconfig: {Name:mkca58267e892de3526cb65d43d387d65171cc36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 19:19:20.035132   98716 kapi.go:59] client config for test-preload-262726: &rest.Config{Host:"https://192.168.39.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.crt", KeyFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key", CAFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 19:19:20.035999 98716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0524 19:19:20.044793 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:20.044834 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:20.055664 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:20.556353 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:20.556419 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:20.568045 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:21.056681 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:21.056772 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:21.069278 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:21.555853 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:21.555970 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:21.568752 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:22.056331 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:22.056406 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:22.068085 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:22.556756 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:22.556817 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:22.568129 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:23.056052 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:23.056114 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:23.067406 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:23.556057 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:23.556157 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:23.567724 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:24.056523 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:24.056611 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:24.068114 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:24.555759 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:24.555836 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:24.567084 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:25.056797 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:25.056894 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:25.068735 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:25.556710 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:25.556797 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:25.568294 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:26.055865 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:26.055955 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:26.067763 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:26.556312 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:26.556383 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:26.567798 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:27.056395 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:27.056488 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:27.068396 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:27.555905 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:27.556029 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:27.567809 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:28.056559 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:28.056631 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:28.068009 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:28.556670 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:28.556756 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:28.568312 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:29.055927 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:29.056100 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:29.067625 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0524 19:19:29.556365 98716 api_server.go:166] Checking apiserver status ...
I0524 19:19:29.556441 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0524 19:19:29.568980 98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
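(The repeated pgrep failures above are one polling loop: roughly every 500ms the runner checks for a kube-apiserver process matching the minikube pattern, and after the surrounding deadline passes it concludes, on the next line, that the cluster "needs reconfigure: apiserver error: context deadline exceeded". A minimal Go sketch of that context-bounded poll follows; it is illustrative, not minikube's own loop.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess retries the pgrep check from the log until it
// succeeds or the context deadline expires.
func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as in the log
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServerProcess(ctx))
}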
I0524 19:19:30.045076 98716 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I0524 19:19:30.045107 98716 kubeadm.go:1123] stopping kube-system containers ...
I0524 19:19:30.045121 98716 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0524 19:19:30.045201 98716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0524 19:19:30.074736 98716 cri.go:88] found id: ""
I0524 19:19:30.074798 98716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0524 19:19:30.090262 98716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0524 19:19:30.099367 98716 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0524 19:19:30.099414 98716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0524 19:19:30.108182 98716 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0524 19:19:30.108211 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0524 19:19:30.204505 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0524 19:19:30.687335 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0524 19:19:31.030674 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0524 19:19:31.093028 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0524 19:19:31.171001 98716 api_server.go:52] waiting for apiserver process to appear ...
I0524 19:19:31.171092 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 19:19:31.685197 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 19:19:32.185226 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 19:19:32.685385 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 19:19:33.185475 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 19:19:33.217487 98716 api_server.go:72] duration metric: took 2.046486567s to wait for apiserver process to appear ...
I0524 19:19:33.217511 98716 api_server.go:88] waiting for apiserver healthz status ...
I0524 19:19:33.217530 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:33.217946 98716 api_server.go:269] stopped: https://192.168.39.12:8443/healthz: Get "https://192.168.39.12:8443/healthz": dial tcp 192.168.39.12:8443: connect: connection refused
I0524 19:19:33.718589 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:33.719145 98716 api_server.go:269] stopped: https://192.168.39.12:8443/healthz: Get "https://192.168.39.12:8443/healthz": dial tcp 192.168.39.12:8443: connect: connection refused
I0524 19:19:34.218496 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:37.184754 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0524 19:19:37.184788 98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0524 19:19:37.184803 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:37.217459 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0524 19:19:37.217501 98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0524 19:19:37.218534 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:37.247324 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0524 19:19:37.247351 98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0524 19:19:37.719107 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:37.724328 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0524 19:19:37.724358 98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0524 19:19:38.219071 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:38.230626 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0524 19:19:38.230657 98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0524 19:19:38.718183 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:38.725084 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
ok
I0524 19:19:38.733586 98716 api_server.go:141] control plane version: v1.24.4
I0524 19:19:38.733607 98716 api_server.go:131] duration metric: took 5.516088251s to wait for apiserver health ...
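The healthz wait that just completed retried through three distinct states shown above: connection refused (apiserver not listening yet), HTTP 403 (anonymous access denied while RBAC bootstrap roles are still being created), and HTTP 500 (one or more poststarthooks still failing), stopping only on a 200 "ok". A rough sketch of that loop, with illustrative names rather than minikube's actual api_server.go functions:

```go
package kverify

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
// InsecureSkipVerify keeps the sketch short; a real client would trust the
// cluster CA instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			// 403 and 500 bodies like the ones above are logged, then retried.
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never returned 200 within %v", url, timeout)
}
```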
I0524 19:19:38.733619 98716 cni.go:84] Creating CNI manager for ""
I0524 19:19:38.733626 98716 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0524 19:19:38.735429 98716 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0524 19:19:38.736884 98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0524 19:19:38.750275 98716 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
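The 457-byte `/etc/cni/net.d/1-k8s.conflist` written here configures the bridge CNI plugin the log recommends for the kvm2 driver plus containerd runtime. A plausible shape of that file, embedded as a Go constant; the exact contents minikube generates may differ, though the 10.244.0.0/24 subnet matches the PodCIDR reported later in this log:

```go
package cni

// bridgeConflist is an illustrative bridge CNI configuration, not a verbatim
// copy of the file minikube wrote above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
```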
I0524 19:19:38.792895 98716 system_pods.go:43] waiting for kube-system pods to appear ...
I0524 19:19:38.803215 98716 system_pods.go:59] 7 kube-system pods found
I0524 19:19:38.803257 98716 system_pods.go:61] "coredns-6d4b75cb6d-dl4xp" [e6216ab8-7bef-475e-a7bb-fab7bf2404f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0524 19:19:38.803273 98716 system_pods.go:61] "etcd-test-preload-262726" [6abf0ad3-e222-42ce-a289-5ec1c18e5a8e] Running
I0524 19:19:38.803282 98716 system_pods.go:61] "kube-apiserver-test-preload-262726" [8c7daa0e-36a3-4b7e-b034-888b3e0173b8] Running
I0524 19:19:38.803291 98716 system_pods.go:61] "kube-controller-manager-test-preload-262726" [de8cc020-34e6-42cd-8770-fdd4c1f5504d] Running
I0524 19:19:38.803303 98716 system_pods.go:61] "kube-proxy-fdclm" [f135ae50-d2c1-4d46-9045-52bf968f5291] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0524 19:19:38.803318 98716 system_pods.go:61] "kube-scheduler-test-preload-262726" [5e394445-b631-46f2-9739-16bbd1d958ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0524 19:19:38.803331 98716 system_pods.go:61] "storage-provisioner" [19459c64-6c74-4bee-89aa-4db2436a469f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0524 19:19:38.803340 98716 system_pods.go:74] duration metric: took 10.424818ms to wait for pod list to return data ...
I0524 19:19:38.803353 98716 node_conditions.go:102] verifying NodePressure condition ...
I0524 19:19:38.807401 98716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0524 19:19:38.807431 98716 node_conditions.go:123] node cpu capacity is 2
I0524 19:19:38.807441 98716 node_conditions.go:105] duration metric: took 4.080321ms to run NodePressure ...
I0524 19:19:38.807457 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0524 19:19:39.036398 98716 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I0524 19:19:39.044153 98716 kubeadm.go:787] kubelet initialised
I0524 19:19:39.044183 98716 kubeadm.go:788] duration metric: took 7.757444ms waiting for restarted kubelet to initialise ...
I0524 19:19:39.044194 98716 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 19:19:39.050375 98716 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
I0524 19:19:39.056854 98716 pod_ready.go:97] node "test-preload-262726" hosting pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.056883 98716 pod_ready.go:81] duration metric: took 6.483427ms waiting for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
E0524 19:19:39.056900 98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.056917 98716 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:39.061342 98716 pod_ready.go:97] node "test-preload-262726" hosting pod "etcd-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.061382 98716 pod_ready.go:81] duration metric: took 4.454845ms waiting for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
E0524 19:19:39.061391 98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "etcd-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.061401 98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:39.065864 98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-apiserver-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.065893 98716 pod_ready.go:81] duration metric: took 4.48328ms waiting for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
E0524 19:19:39.065903 98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-apiserver-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.065912 98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:39.198922 98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.198954 98716 pod_ready.go:81] duration metric: took 133.026489ms waiting for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
E0524 19:19:39.198966 98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.198978 98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
I0524 19:19:39.595976 98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-proxy-fdclm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.596010 98716 pod_ready.go:81] duration metric: took 397.023508ms waiting for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
E0524 19:19:39.596036 98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-proxy-fdclm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.596050 98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:39.997580 98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-scheduler-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.997637 98716 pod_ready.go:81] duration metric: took 401.571361ms waiting for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
E0524 19:19:39.997650 98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-scheduler-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
I0524 19:19:39.997662 98716 pod_ready.go:38] duration metric: took 953.453981ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
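The pod_ready.go pattern above is worth unpacking: each system-critical pod is polled for a `Ready` condition, but while the hosting node still reports `Ready: False` the wait is skipped with an error rather than blocked, which is why every pod "failed" in under a second here. A condensed client-go sketch of the per-pod check, with illustrative names:

```go
package kverify

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its PodReady condition is True or the
// timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}
```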
I0524 19:19:39.997700 98716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0524 19:19:40.010093 98716 ops.go:34] apiserver oom_adj: -16
I0524 19:19:40.010116 98716 kubeadm.go:640] restartCluster took 19.985385643s
I0524 19:19:40.010126 98716 kubeadm.go:406] StartCluster complete in 20.030905076s
I0524 19:19:40.010146 98716 settings.go:142] acquiring lock: {Name:mk242f143a6a02c2ddba85b6f580593271dad784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 19:19:40.010229 98716 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16573-71939/kubeconfig
I0524 19:19:40.010836 98716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16573-71939/kubeconfig: {Name:mkca58267e892de3526cb65d43d387d65171cc36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 19:19:40.011099 98716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0524 19:19:40.011229 98716 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0524 19:19:40.011304 98716 addons.go:66] Setting storage-provisioner=true in profile "test-preload-262726"
I0524 19:19:40.011321 98716 addons.go:66] Setting default-storageclass=true in profile "test-preload-262726"
I0524 19:19:40.011348 98716 addons.go:228] Setting addon storage-provisioner=true in "test-preload-262726"
W0524 19:19:40.011362 98716 addons.go:237] addon storage-provisioner should already be in state true
I0524 19:19:40.011369 98716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-262726"
I0524 19:19:40.011413 98716 host.go:66] Checking if "test-preload-262726" exists ...
I0524 19:19:40.011331 98716 config.go:182] Loaded profile config "test-preload-262726": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0524 19:19:40.011620 98716 kapi.go:59] client config for test-preload-262726: &rest.Config{Host:"https://192.168.39.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.crt", KeyFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key", CAFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
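The large rest.Config dump above boils down to a client built from the profile's client certificate, key, and the cluster CA. A hand-rolled equivalent of what kapi.go assembles, assuming only those three file paths and the host URL:

```go
package kapi

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientFor builds a Kubernetes clientset from cert-based TLS credentials,
// e.g. host "https://192.168.39.12:8443" and the profile paths logged above.
func clientFor(host, certFile, keyFile, caFile string) (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: host,
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: certFile,
			KeyFile:  keyFile,
			CAFile:   caFile,
		},
	}
	return kubernetes.NewForConfig(cfg)
}
```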
I0524 19:19:40.011780 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:19:40.011804 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:19:40.011827 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:19:40.011840 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:19:40.016367 98716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-262726" context rescaled to 1 replicas
I0524 19:19:40.016401 98716 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0524 19:19:40.018817 98716 out.go:177] * Verifying Kubernetes components...
I0524 19:19:40.020610 98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0524 19:19:40.026396 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
I0524 19:19:40.026473 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
I0524 19:19:40.026796 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:19:40.027052 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:19:40.027353 98716 main.go:141] libmachine: Using API Version 1
I0524 19:19:40.027379 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:19:40.027532 98716 main.go:141] libmachine: Using API Version 1
I0524 19:19:40.027564 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:19:40.027673 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:19:40.027877 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
I0524 19:19:40.027913 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:19:40.028508 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:19:40.028578 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:19:40.030482 98716 kapi.go:59] client config for test-preload-262726: &rest.Config{Host:"https://192.168.39.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.crt", KeyFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key", CAFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 19:19:40.039596 98716 addons.go:228] Setting addon default-storageclass=true in "test-preload-262726"
W0524 19:19:40.039619 98716 addons.go:237] addon default-storageclass should already be in state true
I0524 19:19:40.039643 98716 host.go:66] Checking if "test-preload-262726" exists ...
I0524 19:19:40.040035 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:19:40.040090 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:19:40.043815 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
I0524 19:19:40.044224 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:19:40.044748 98716 main.go:141] libmachine: Using API Version 1
I0524 19:19:40.044783 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:19:40.045100 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:19:40.045311 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
I0524 19:19:40.047036 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:19:40.049308 98716 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0524 19:19:40.051080 98716 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0524 19:19:40.051100 98716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0524 19:19:40.051118 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:19:40.054545 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:19:40.055011 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:19:40.055051 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:19:40.055239 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:19:40.055423 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:19:40.055630 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:19:40.055674 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
I0524 19:19:40.055789 98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
I0524 19:19:40.056033 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:19:40.056480 98716 main.go:141] libmachine: Using API Version 1
I0524 19:19:40.056501 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:19:40.056825 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:19:40.057255 98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 19:19:40.057293 98716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 19:19:40.071690 98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
I0524 19:19:40.072020 98716 main.go:141] libmachine: () Calling .GetVersion
I0524 19:19:40.072458 98716 main.go:141] libmachine: Using API Version 1
I0524 19:19:40.072481 98716 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 19:19:40.072807 98716 main.go:141] libmachine: () Calling .GetMachineName
I0524 19:19:40.073010 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
I0524 19:19:40.074702 98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
I0524 19:19:40.074949 98716 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0524 19:19:40.074965 98716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0524 19:19:40.074987 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
I0524 19:19:40.078268 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:19:40.078298 98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
I0524 19:19:40.078329 98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
I0524 19:19:40.078510 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
I0524 19:19:40.078692 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
I0524 19:19:40.078864 98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
I0524 19:19:40.079097 98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
I0524 19:19:40.186328 98716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0524 19:19:40.212428 98716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0524 19:19:40.214579 98716 node_ready.go:35] waiting up to 6m0s for node "test-preload-262726" to be "Ready" ...
I0524 19:19:40.214707 98716 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0524 19:19:41.073998 98716 main.go:141] libmachine: Making call to close driver server
I0524 19:19:41.074026 98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
I0524 19:19:41.074153 98716 main.go:141] libmachine: Making call to close driver server
I0524 19:19:41.074183 98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
I0524 19:19:41.074327 98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
I0524 19:19:41.074399 98716 main.go:141] libmachine: Successfully made call to close driver server
I0524 19:19:41.074418 98716 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 19:19:41.074433 98716 main.go:141] libmachine: Making call to close driver server
I0524 19:19:41.074446 98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
I0524 19:19:41.074474 98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
I0524 19:19:41.074484 98716 main.go:141] libmachine: Successfully made call to close driver server
I0524 19:19:41.074502 98716 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 19:19:41.074525 98716 main.go:141] libmachine: Making call to close driver server
I0524 19:19:41.074541 98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
I0524 19:19:41.074672 98716 main.go:141] libmachine: Successfully made call to close driver server
I0524 19:19:41.074689 98716 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 19:19:41.074702 98716 main.go:141] libmachine: Making call to close driver server
I0524 19:19:41.074711 98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
I0524 19:19:41.074810 98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
I0524 19:19:41.074842 98716 main.go:141] libmachine: Successfully made call to close driver server
I0524 19:19:41.074852 98716 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 19:19:41.074904 98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
I0524 19:19:41.074943 98716 main.go:141] libmachine: Successfully made call to close driver server
I0524 19:19:41.074963 98716 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 19:19:41.077473 98716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0524 19:19:41.079154 98716 addons.go:499] enable addons completed in 1.067932333s: enabled=[storage-provisioner default-storageclass]
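The addon flow that just finished is visible in the surrounding lines: each manifest is copied onto the guest ("scp memory --> /etc/kubernetes/addons/..."), then applied with the kubectl binary pinned to the cluster version. A hypothetical condensed form of the command construction; how the command is executed over SSH is elided:

```go
package addons

import "fmt"

// applyAddon returns the shell command shape the log shows minikube running
// for each addon manifest, e.g. k8sVersion "v1.24.4" and manifest
// "/etc/kubernetes/addons/storage-provisioner.yaml".
func applyAddon(k8sVersion, manifest string) string {
	return fmt.Sprintf(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/%s/kubectl apply -f %s",
		k8sVersion, manifest)
}
```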
I0524 19:19:42.220145 98716 node_ready.go:58] node "test-preload-262726" has status "Ready":"False"
I0524 19:19:44.220895 98716 node_ready.go:58] node "test-preload-262726" has status "Ready":"False"
I0524 19:19:46.222287 98716 node_ready.go:58] node "test-preload-262726" has status "Ready":"False"
I0524 19:19:47.721822 98716 node_ready.go:49] node "test-preload-262726" has status "Ready":"True"
I0524 19:19:47.721844 98716 node_ready.go:38] duration metric: took 7.507239286s waiting for node "test-preload-262726" to be "Ready" ...
I0524 19:19:47.721852 98716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 19:19:47.729253 98716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
I0524 19:19:47.734682 98716 pod_ready.go:92] pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace has status "Ready":"True"
I0524 19:19:47.734700 98716 pod_ready.go:81] duration metric: took 5.425367ms waiting for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
I0524 19:19:47.734707 98716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:48.744478 98716 pod_ready.go:92] pod "etcd-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
I0524 19:19:48.744504 98716 pod_ready.go:81] duration metric: took 1.009789802s waiting for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:48.744514 98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:48.748898 98716 pod_ready.go:92] pod "kube-apiserver-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
I0524 19:19:48.748915 98716 pod_ready.go:81] duration metric: took 4.39372ms waiting for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:48.748927 98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:49.264578 98716 pod_ready.go:92] pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
I0524 19:19:49.264614 98716 pod_ready.go:81] duration metric: took 515.67186ms waiting for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:49.264628 98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
I0524 19:19:49.321513 98716 pod_ready.go:92] pod "kube-proxy-fdclm" in "kube-system" namespace has status "Ready":"True"
I0524 19:19:49.321530 98716 pod_ready.go:81] duration metric: took 56.896398ms waiting for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
I0524 19:19:49.321538 98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:51.728864 98716 pod_ready.go:102] pod "kube-scheduler-test-preload-262726" in "kube-system" namespace has status "Ready":"False"
I0524 19:19:52.726450 98716 pod_ready.go:92] pod "kube-scheduler-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
I0524 19:19:52.726474 98716 pod_ready.go:81] duration metric: took 3.404929577s waiting for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
I0524 19:19:52.726482 98716 pod_ready.go:38] duration metric: took 5.004621803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 19:19:52.726498 98716 api_server.go:52] waiting for apiserver process to appear ...
I0524 19:19:52.726556 98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 19:19:52.739958 98716 api_server.go:72] duration metric: took 12.723528271s to wait for apiserver process to appear ...
I0524 19:19:52.739977 98716 api_server.go:88] waiting for apiserver healthz status ...
I0524 19:19:52.739992 98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
I0524 19:19:52.744881 98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
ok
I0524 19:19:52.745704 98716 api_server.go:141] control plane version: v1.24.4
I0524 19:19:52.745724 98716 api_server.go:131] duration metric: took 5.740658ms to wait for apiserver health ...
I0524 19:19:52.745731 98716 system_pods.go:43] waiting for kube-system pods to appear ...
I0524 19:19:52.751422 98716 system_pods.go:59] 7 kube-system pods found
I0524 19:19:52.751441 98716 system_pods.go:61] "coredns-6d4b75cb6d-dl4xp" [e6216ab8-7bef-475e-a7bb-fab7bf2404f3] Running
I0524 19:19:52.751446 98716 system_pods.go:61] "etcd-test-preload-262726" [6abf0ad3-e222-42ce-a289-5ec1c18e5a8e] Running
I0524 19:19:52.751450 98716 system_pods.go:61] "kube-apiserver-test-preload-262726" [8c7daa0e-36a3-4b7e-b034-888b3e0173b8] Running
I0524 19:19:52.751454 98716 system_pods.go:61] "kube-controller-manager-test-preload-262726" [de8cc020-34e6-42cd-8770-fdd4c1f5504d] Running
I0524 19:19:52.751458 98716 system_pods.go:61] "kube-proxy-fdclm" [f135ae50-d2c1-4d46-9045-52bf968f5291] Running
I0524 19:19:52.751464 98716 system_pods.go:61] "kube-scheduler-test-preload-262726" [5e394445-b631-46f2-9739-16bbd1d958ac] Running
I0524 19:19:52.751468 98716 system_pods.go:61] "storage-provisioner" [19459c64-6c74-4bee-89aa-4db2436a469f] Running
I0524 19:19:52.751473 98716 system_pods.go:74] duration metric: took 5.737408ms to wait for pod list to return data ...
I0524 19:19:52.751479 98716 default_sa.go:34] waiting for default service account to be created ...
I0524 19:19:52.753185 98716 default_sa.go:45] found service account: "default"
I0524 19:19:52.753202 98716 default_sa.go:55] duration metric: took 1.716393ms for default service account to be created ...
I0524 19:19:52.753210 98716 system_pods.go:116] waiting for k8s-apps to be running ...
I0524 19:19:52.757409 98716 system_pods.go:86] 7 kube-system pods found
I0524 19:19:52.757428 98716 system_pods.go:89] "coredns-6d4b75cb6d-dl4xp" [e6216ab8-7bef-475e-a7bb-fab7bf2404f3] Running
I0524 19:19:52.757436 98716 system_pods.go:89] "etcd-test-preload-262726" [6abf0ad3-e222-42ce-a289-5ec1c18e5a8e] Running
I0524 19:19:52.757442 98716 system_pods.go:89] "kube-apiserver-test-preload-262726" [8c7daa0e-36a3-4b7e-b034-888b3e0173b8] Running
I0524 19:19:52.757448 98716 system_pods.go:89] "kube-controller-manager-test-preload-262726" [de8cc020-34e6-42cd-8770-fdd4c1f5504d] Running
I0524 19:19:52.757453 98716 system_pods.go:89] "kube-proxy-fdclm" [f135ae50-d2c1-4d46-9045-52bf968f5291] Running
I0524 19:19:52.757460 98716 system_pods.go:89] "kube-scheduler-test-preload-262726" [5e394445-b631-46f2-9739-16bbd1d958ac] Running
I0524 19:19:52.757465 98716 system_pods.go:89] "storage-provisioner" [19459c64-6c74-4bee-89aa-4db2436a469f] Running
I0524 19:19:52.757472 98716 system_pods.go:126] duration metric: took 4.257212ms to wait for k8s-apps to be running ...
I0524 19:19:52.757482 98716 system_svc.go:44] waiting for kubelet service to be running ....
I0524 19:19:52.757522 98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0524 19:19:52.772046 98716 system_svc.go:56] duration metric: took 14.56114ms WaitForService to wait for kubelet.
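The kubelet service check in system_svc.go reduces to a systemd query whose exit status is the answer. A sketch, run locally here for brevity (minikube executes it on the guest via ssh_runner, and the argument list mirrors the command logged above):

```go
package kverify

import "os/exec"

// kubeletActive reports whether the kubelet unit is active; `systemctl
// is-active --quiet` exits 0 only when the unit is active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}
```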
I0524 19:19:52.772062 98716 kubeadm.go:581] duration metric: took 12.75563501s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0524 19:19:52.772094 98716 node_conditions.go:102] verifying NodePressure condition ...
I0524 19:19:52.921919 98716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0524 19:19:52.921945 98716 node_conditions.go:123] node cpu capacity is 2
I0524 19:19:52.921954 98716 node_conditions.go:105] duration metric: took 149.854657ms to run NodePressure ...
I0524 19:19:52.921964 98716 start.go:228] waiting for startup goroutines ...
I0524 19:19:52.921970 98716 start.go:233] waiting for cluster config update ...
I0524 19:19:52.921979 98716 start.go:242] writing updated cluster config ...
I0524 19:19:52.922225 98716 ssh_runner.go:195] Run: rm -f paused
I0524 19:19:52.970018 98716 start.go:568] kubectl: 1.27.2, cluster: 1.24.4 (minor skew: 3)
I0524 19:19:52.972074 98716 out.go:177]
W0524 19:19:52.973715 98716 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.24.4.
I0524 19:19:52.975183 98716 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0524 19:19:52.976679 98716 out.go:177] * Done! kubectl is now configured to use "test-preload-262726" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
eb128cc2aa9b8 a4ca41631cc7a 7 seconds ago Running coredns 1 d61c04fc36dad
523278fedeee7 6e38f40d628db 14 seconds ago Running storage-provisioner 1 23220399aa9fd
2d99e8fb77a0e 7a53d1e08ef58 14 seconds ago Running kube-proxy 1 4e38a9d2fc98a
29eed419b3c13 03fa22539fc1c 20 seconds ago Running kube-scheduler 1 53b6a904ae609
74fd7b94b3d16 aebe758cef4cd 21 seconds ago Running etcd 1 87a1cf416a684
6775fb5cd8bfe 1f99cb6da9a82 21 seconds ago Running kube-controller-manager 1 8875b1315c18d
eaac0ac39a864 6cab9d1bed1be 21 seconds ago Running kube-apiserver 1 8abb6ba3c904f
*
* ==> containerd <==
* -- Journal begins at Wed 2023-05-24 19:18:46 UTC, ends at Wed 2023-05-24 19:19:53 UTC. --
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.072738227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdclm,Uid:f135ae50-d2c1-4d46-9045-52bf968f5291,Namespace:kube-system,Attempt:0,}"
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117100613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117176425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117190430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117202126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.189400977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdclm,Uid:f135ae50-d2c1-4d46-9045-52bf968f5291,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e38a9d2fc98a2e012390c9312aafe412574ea6c8f7aa72c786be600f3c76dc7\""
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.192447705Z" level=info msg="CreateContainer within sandbox \"4e38a9d2fc98a2e012390c9312aafe412574ea6c8f7aa72c786be600f3c76dc7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.276066374Z" level=info msg="CreateContainer within sandbox \"4e38a9d2fc98a2e012390c9312aafe412574ea6c8f7aa72c786be600f3c76dc7\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d\""
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.276490178Z" level=info msg="StartContainer for \"2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d\""
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.525002226Z" level=info msg="StartContainer for \"2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d\" returns successfully"
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.686791066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:19459c64-6c74-4bee-89aa-4db2436a469f,Namespace:kube-system,Attempt:0,} returns sandbox id \"23220399aa9fda62e9a01f6ff2cc1278ab199048d5846c964d42272d64185618\""
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.693673401Z" level=info msg="CreateContainer within sandbox \"23220399aa9fda62e9a01f6ff2cc1278ab199048d5846c964d42272d64185618\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.723799600Z" level=info msg="CreateContainer within sandbox \"23220399aa9fda62e9a01f6ff2cc1278ab199048d5846c964d42272d64185618\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9\""
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.728652018Z" level=info msg="StartContainer for \"523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9\""
May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.815369515Z" level=info msg="StartContainer for \"523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9\" returns successfully"
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.179857822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-dl4xp,Uid:e6216ab8-7bef-475e-a7bb-fab7bf2404f3,Namespace:kube-system,Attempt:0,}"
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.296405835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.296563390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.297053879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.297220098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.617144821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-dl4xp,Uid:e6216ab8-7bef-475e-a7bb-fab7bf2404f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61c04fc36dad41da1f3fd121a1e9a58b3303b81b637296d8748b73384294c15\""
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.626349352Z" level=info msg="CreateContainer within sandbox \"d61c04fc36dad41da1f3fd121a1e9a58b3303b81b637296d8748b73384294c15\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.657778040Z" level=info msg="CreateContainer within sandbox \"d61c04fc36dad41da1f3fd121a1e9a58b3303b81b637296d8748b73384294c15\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66\""
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.658449880Z" level=info msg="StartContainer for \"eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66\""
May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.734080862Z" level=info msg="StartContainer for \"eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66\" returns successfully"
*
* ==> coredns [eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:35090 - 53711 "HINFO IN 2317895724519841220.6243386728523230990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026718903s
*
* ==> describe nodes <==
* Name: test-preload-262726
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-262726
kubernetes.io/os=linux
minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
minikube.k8s.io/name=test-preload-262726
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_05_24T19_16_01_0700
minikube.k8s.io/version=v1.30.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 24 May 2023 19:15:58 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-262726
AcquireTime: <unset>
RenewTime: Wed, 24 May 2023 19:19:47 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 24 May 2023 19:19:47 +0000 Wed, 24 May 2023 19:15:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 24 May 2023 19:19:47 +0000 Wed, 24 May 2023 19:15:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 24 May 2023 19:19:47 +0000 Wed, 24 May 2023 19:15:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 24 May 2023 19:19:47 +0000 Wed, 24 May 2023 19:19:47 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.12
Hostname: test-preload-262726
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 71fb17bf05d74f43b2840a45b1dbe120
System UUID: 71fb17bf-05d7-4f43-b284-0a45b1dbe120
Boot ID: 46a523bd-8a48-41d6-abf4-2ff01283005e
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.1
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-dl4xp 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 3m39s
kube-system etcd-test-preload-262726 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 3m53s
kube-system kube-apiserver-test-preload-262726 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m53s
kube-system kube-controller-manager-test-preload-262726 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m53s
kube-system kube-proxy-fdclm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m39s
kube-system kube-scheduler-test-preload-262726 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m53s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m37s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 14s kube-proxy
Normal Starting 3m36s kube-proxy
Normal NodeHasSufficientMemory 4m2s (x5 over 4m2s) kubelet Node test-preload-262726 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m2s (x4 over 4m2s) kubelet Node test-preload-262726 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m2s (x4 over 4m2s) kubelet Node test-preload-262726 status is now: NodeHasSufficientPID
Normal Starting 3m53s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m53s kubelet Node test-preload-262726 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m53s kubelet Node test-preload-262726 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m53s kubelet Node test-preload-262726 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m53s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m43s kubelet Node test-preload-262726 status is now: NodeReady
Normal RegisteredNode 3m40s node-controller Node test-preload-262726 event: Registered Node test-preload-262726 in Controller
Normal Starting 23s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 23s (x8 over 23s) kubelet Node test-preload-262726 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23s (x8 over 23s) kubelet Node test-preload-262726 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23s (x7 over 23s) kubelet Node test-preload-262726 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 23s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5s node-controller Node test-preload-262726 event: Registered Node test-preload-262726 in Controller
*
* ==> dmesg <==
* [May24 19:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.070534] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.948715] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.321789] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.143964] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.605264] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[May24 19:19] systemd-fstab-generator[528]: Ignoring "noauto" for root device
[ +2.867542] systemd-fstab-generator[556]: Ignoring "noauto" for root device
[ +0.103604] systemd-fstab-generator[567]: Ignoring "noauto" for root device
[ +0.116088] systemd-fstab-generator[580]: Ignoring "noauto" for root device
[ +0.096673] systemd-fstab-generator[591]: Ignoring "noauto" for root device
[ +0.225058] systemd-fstab-generator[619]: Ignoring "noauto" for root device
[ +5.860960] systemd-fstab-generator[680]: Ignoring "noauto" for root device
[ +19.217176] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
[ +8.869960] kauditd_printk_skb: 7 callbacks suppressed
[ +6.419096] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [74fd7b94b3d161546000746309dad04a8441fc4b1e0c77eed524691371150bac] <==
* {"level":"info","ts":"2023-05-24T19:19:33.456Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ab0e927fe14112bb","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-05-24T19:19:33.459Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-05-24T19:19:33.462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb switched to configuration voters=(12325950308097266363)"}
{"level":"info","ts":"2023-05-24T19:19:33.462Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","added-peer-id":"ab0e927fe14112bb","added-peer-peer-urls":["https://192.168.39.12:2380"]}
{"level":"info","ts":"2023-05-24T19:19:33.462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-24T19:19:33.462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-24T19:19:33.464Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ab0e927fe14112bb","initial-advertise-peer-urls":["https://192.168.39.12:2380"],"listen-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.12:2380"}
{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.12:2380"}
{"level":"info","ts":"2023-05-24T19:19:34.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb is starting a new election at term 2"}
{"level":"info","ts":"2023-05-24T19:19:34.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb became pre-candidate at term 2"}
{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb received MsgPreVoteResp from ab0e927fe14112bb at term 2"}
{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb became candidate at term 3"}
{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb received MsgVoteResp from ab0e927fe14112bb at term 3"}
{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb became leader at term 3"}
{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ab0e927fe14112bb elected leader ab0e927fe14112bb at term 3"}
{"level":"info","ts":"2023-05-24T19:19:34.528Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ab0e927fe14112bb","local-member-attributes":"{Name:test-preload-262726 ClientURLs:[https://192.168.39.12:2379]}","request-path":"/0/members/ab0e927fe14112bb/attributes","cluster-id":"5f0195cf24a31222","publish-timeout":"7s"}
{"level":"info","ts":"2023-05-24T19:19:34.529Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-05-24T19:19:34.529Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-05-24T19:19:34.531Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-05-24T19:19:34.533Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.12:2379"}
{"level":"info","ts":"2023-05-24T19:19:34.543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-05-24T19:19:34.543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 19:19:54 up 1 min, 0 users, load average: 1.25, 0.39, 0.14
Linux test-preload-262726 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [eaac0ac39a864752ab541e0b5eb1fec135493b9db624a5d5d1226d653a68a2f4] <==
* I0524 19:19:37.162095 1 establishing_controller.go:76] Starting EstablishingController
I0524 19:19:37.162141 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0524 19:19:37.162155 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0524 19:19:37.162198 1 crd_finalizer.go:266] Starting CRDFinalizer
I0524 19:19:37.163448 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0524 19:19:37.163558 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0524 19:19:37.250988 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0524 19:19:37.254173 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0524 19:19:37.263819 1 shared_informer.go:262] Caches are synced for crd-autoregister
E0524 19:19:37.268428 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0524 19:19:37.303770 1 shared_informer.go:262] Caches are synced for node_authorizer
I0524 19:19:37.329475 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0524 19:19:37.332119 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0524 19:19:37.332378 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0524 19:19:37.332133 1 cache.go:39] Caches are synced for autoregister controller
I0524 19:19:37.765332 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0524 19:19:38.142791 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0524 19:19:38.941379 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0524 19:19:38.951899 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0524 19:19:38.993821 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0524 19:19:39.014108 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0524 19:19:39.021378 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0524 19:19:39.829929 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0524 19:19:49.959412 1 controller.go:611] quota admission added evaluator for: endpoints
I0524 19:19:49.976687 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [6775fb5cd8bfe62b27eab28d4469d6e0cd1e09ee65ad65b8d533bf1080ea6e76] <==
* I0524 19:19:49.913236 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0524 19:19:49.913336 1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-262726. Assuming now as a timestamp.
I0524 19:19:49.913503 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0524 19:19:49.913762 1 shared_informer.go:262] Caches are synced for TTL
I0524 19:19:49.914003 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0524 19:19:49.914401 1 event.go:294] "Event occurred" object="test-preload-262726" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-262726 event: Registered Node test-preload-262726 in Controller"
I0524 19:19:49.917763 1 shared_informer.go:262] Caches are synced for job
I0524 19:19:49.922903 1 shared_informer.go:262] Caches are synced for PV protection
I0524 19:19:49.946387 1 shared_informer.go:262] Caches are synced for endpoint
I0524 19:19:49.962003 1 shared_informer.go:262] Caches are synced for HPA
I0524 19:19:49.965496 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0524 19:19:49.970757 1 shared_informer.go:262] Caches are synced for GC
I0524 19:19:49.975824 1 shared_informer.go:262] Caches are synced for expand
I0524 19:19:49.981377 1 shared_informer.go:262] Caches are synced for PVC protection
I0524 19:19:50.012777 1 shared_informer.go:262] Caches are synced for ephemeral
I0524 19:19:50.021454 1 shared_informer.go:262] Caches are synced for persistent volume
I0524 19:19:50.033156 1 shared_informer.go:262] Caches are synced for attach detach
I0524 19:19:50.081131 1 shared_informer.go:262] Caches are synced for resource quota
I0524 19:19:50.089554 1 shared_informer.go:262] Caches are synced for stateful set
I0524 19:19:50.094022 1 shared_informer.go:262] Caches are synced for disruption
I0524 19:19:50.094055 1 disruption.go:371] Sending events to api server.
I0524 19:19:50.126016 1 shared_informer.go:262] Caches are synced for resource quota
I0524 19:19:50.548255 1 shared_informer.go:262] Caches are synced for garbage collector
I0524 19:19:50.548408 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0524 19:19:50.577828 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-proxy [2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d] <==
* I0524 19:19:39.730687 1 node.go:163] Successfully retrieved node IP: 192.168.39.12
I0524 19:19:39.730749 1 server_others.go:138] "Detected node IP" address="192.168.39.12"
I0524 19:19:39.730779 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0524 19:19:39.810902 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0524 19:19:39.810964 1 server_others.go:206] "Using iptables Proxier"
I0524 19:19:39.813002 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0524 19:19:39.817936 1 server.go:661] "Version info" version="v1.24.4"
I0524 19:19:39.817976 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0524 19:19:39.822060 1 config.go:226] "Starting endpoint slice config controller"
I0524 19:19:39.822412 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0524 19:19:39.822441 1 config.go:317] "Starting service config controller"
I0524 19:19:39.822447 1 shared_informer.go:255] Waiting for caches to sync for service config
I0524 19:19:39.824890 1 config.go:444] "Starting node config controller"
I0524 19:19:39.824898 1 shared_informer.go:255] Waiting for caches to sync for node config
I0524 19:19:39.923535 1 shared_informer.go:262] Caches are synced for service config
I0524 19:19:39.923898 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0524 19:19:39.925637 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [29eed419b3c13e6e2bf515c9caadf1be7cfdc191696e0ad2353dd6fa4f9ece9c] <==
* I0524 19:19:34.744263 1 serving.go:348] Generated self-signed cert in-memory
W0524 19:19:37.187292 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0524 19:19:37.187584 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0524 19:19:37.187998 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0524 19:19:37.188182 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0524 19:19:37.252032 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
I0524 19:19:37.252080 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0524 19:19:37.253794 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0524 19:19:37.257802 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0524 19:19:37.258058 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0524 19:19:37.258237 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0524 19:19:37.359371 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Wed 2023-05-24 19:18:46 UTC, ends at Wed 2023-05-24 19:19:54 UTC. --
May 24 19:19:37 test-preload-262726 kubelet[1029]: I0524 19:19:37.305006 1029 setters.go:532] "Node became not ready" node="test-preload-262726" condition={Type:Ready Status:False LastHeartbeatTime:2023-05-24 19:19:37.304764733 +0000 UTC m=+6.289326905 LastTransitionTime:2023-05-24 19:19:37.304764733 +0000 UTC m=+6.289326905 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
May 24 19:19:37 test-preload-262726 kubelet[1029]: E0524 19:19:37.335933 1029 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.160479 1029 apiserver.go:52] "Watching apiserver"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.163926 1029 topology_manager.go:200] "Topology Admit Handler"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.164081 1029 topology_manager.go:200] "Topology Admit Handler"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.164157 1029 topology_manager.go:200] "Topology Admit Handler"
May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.165895 1029 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-dl4xp" podUID=e6216ab8-7bef-475e-a7bb-fab7bf2404f3
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234510 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84m27\" (UniqueName: \"kubernetes.io/projected/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-kube-api-access-84m27\") pod \"coredns-6d4b75cb6d-dl4xp\" (UID: \"e6216ab8-7bef-475e-a7bb-fab7bf2404f3\") " pod="kube-system/coredns-6d4b75cb6d-dl4xp"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234543 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f135ae50-d2c1-4d46-9045-52bf968f5291-lib-modules\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234565 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhkr\" (UniqueName: \"kubernetes.io/projected/19459c64-6c74-4bee-89aa-4db2436a469f-kube-api-access-tmhkr\") pod \"storage-provisioner\" (UID: \"19459c64-6c74-4bee-89aa-4db2436a469f\") " pod="kube-system/storage-provisioner"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234585 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume\") pod \"coredns-6d4b75cb6d-dl4xp\" (UID: \"e6216ab8-7bef-475e-a7bb-fab7bf2404f3\") " pod="kube-system/coredns-6d4b75cb6d-dl4xp"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234662 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f135ae50-d2c1-4d46-9045-52bf968f5291-kube-proxy\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234682 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f135ae50-d2c1-4d46-9045-52bf968f5291-xtables-lock\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234700 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l69pk\" (UniqueName: \"kubernetes.io/projected/f135ae50-d2c1-4d46-9045-52bf968f5291-kube-api-access-l69pk\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234720 1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/19459c64-6c74-4bee-89aa-4db2436a469f-tmp\") pod \"storage-provisioner\" (UID: \"19459c64-6c74-4bee-89aa-4db2436a469f\") " pod="kube-system/storage-provisioner"
May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234728 1029 reconciler.go:159] "Reconciler: start to sync state"
May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.336196 1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.336293 1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:38.836273418 +0000 UTC m=+7.820835599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.839996 1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.840075 1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:39.840060409 +0000 UTC m=+8.824622585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
May 24 19:19:39 test-preload-262726 kubelet[1029]: E0524 19:19:39.848252 1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
May 24 19:19:39 test-preload-262726 kubelet[1029]: E0524 19:19:39.848356 1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:41.848333901 +0000 UTC m=+10.832896074 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
May 24 19:19:40 test-preload-262726 kubelet[1029]: E0524 19:19:40.268308 1029 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-dl4xp" podUID=e6216ab8-7bef-475e-a7bb-fab7bf2404f3
May 24 19:19:41 test-preload-262726 kubelet[1029]: E0524 19:19:41.862953 1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
May 24 19:19:41 test-preload-262726 kubelet[1029]: E0524 19:19:41.863399 1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:45.863378995 +0000 UTC m=+14.847941156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
*
* ==> storage-provisioner [523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9] <==
* I0524 19:19:39.877154 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
-- /stdout --
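The etcd log in the capture above records a clean single-member restart: the node pre-votes, wins the election at term 3, and serves client traffic on 192.168.39.12:2379 with the TLS material listed in the embed/etcd.go line, so etcd is not the culprit here. When triaging a post-mortem like this, one quick sanity check is a Status call with the official Go client. The sketch below is illustrative only: the endpoint and trusted-CA path come from the log, but reusing the server cert/key as a client identity is an assumption (client-cert-auth is enabled, and the log names no dedicated client cert), and it would have to run somewhere those files and the endpoint are reachable.

    // etcd_probe.go — a minimal sketch of an etcd liveness probe, not part of
    // the minikube test suite. Endpoint and CA path are taken from the log
    // above; using the server cert/key as the client identity is assumed.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"go.etcd.io/etcd/client/pkg/v3/transport"
    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
    	tlsInfo := transport.TLSInfo{
    		CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
    		KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
    		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
    	}
    	tlsConfig, err := tlsInfo.ClientConfig()
    	if err != nil {
    		log.Fatal(err)
    	}

    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints:   []string{"https://192.168.39.12:2379"},
    		DialTimeout: 5 * time.Second,
    		TLS:         tlsConfig,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	status, err := cli.Status(ctx, "https://192.168.39.12:2379")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// A healthy single-member cluster reports itself as leader; the raft
    	// term should match the "became leader at term 3" line in the log.
    	fmt.Printf("leader=%x raftTerm=%d version=%s\n", status.Leader, status.RaftTerm, status.Version)
    }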
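The kubelet entries are likewise expected startup churn rather than the failure itself: the coredns config-volume mount fails while the "coredns" ConfigMap is not yet registered, and the retry delay doubles on each attempt (500ms, 1s, 2s, 4s) as the durationBeforeRetry values show. Below is a minimal sketch of that doubling-with-cap retry pattern, assuming the operation eventually succeeds once the object appears; it is an illustration of the pattern visible in the log, not kubelet's nestedpendingoperations implementation.

    // A minimal sketch of the exponential backoff visible in the kubelet log
    // (500ms, 1s, 2s, 4s, ...). Illustration only; not kubelet's code.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // errNotRegistered stands in for the MountVolume.SetUp failure seen above,
    // which persists until the referenced ConfigMap has been registered.
    var errNotRegistered = errors.New(`object "kube-system"/"coredns" not registered`)

    func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", i+1, err, delay)
    		time.Sleep(delay)
    		delay *= 2
    		if delay > max {
    			delay = max // cap the doubling rather than growing forever
    		}
    	}
    	return err
    }

    func main() {
    	calls := 0
    	op := func() error {
    		calls++
    		if calls < 5 {
    			return errNotRegistered
    		}
    		return nil // the ConfigMap finally shows up
    	}
    	if err := retryWithBackoff(op, 500*time.Millisecond, 2*time.Minute, 10); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }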
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-262726 -n test-preload-262726
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-262726 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-262726" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-262726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-262726: (1.246236545s)
--- FAIL: TestPreload (297.25s)
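For reference, the check that actually failed (preload_test.go:85) can be approximated from the commands the log itself runs: ssh into the node, run sudo crictl image ls, and grep the output for gcr.io/k8s-minikube/busybox, which the test pulled before the stop/start cycle. The sketch below reconstructs that shape under assumptions; the real test's helpers, flags, and error handling differ.

    // A minimal sketch of the failed assertion: list images inside the node
    // via crictl and look for the busybox image pulled before the restart.
    // Shape reconstructed from the commands in the log; not the test's code.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	profile := "test-preload-262726"
    	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
    		"--", "sudo", "crictl", "image", "ls").CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "crictl image ls failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
    		// This is the condition TestPreload tripped on: busybox was
    		// missing from the post-restart image list.
    		fmt.Fprintf(os.Stderr, "expected busybox in image list, got:\n%s", out)
    		os.Exit(1)
    	}
    	fmt.Println("busybox image survived the restart")
    }

Run against the profile above before cleanup, this would have reported the same failure: the post-restart image list contains only the preloaded Kubernetes images, and busybox is gone.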