=== RUN TestOffline
=== PAUSE TestOffline
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Run: out/minikube-windows-amd64.exe start -p offline-docker-012800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-012800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: exit status 90 (5m55.5665083s)
-- stdout --
* [offline-docker-012800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5247 Build 19045.5247
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=20151
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "offline-docker-012800" primary control-plane node in "offline-docker-012800" cluster
* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Found network options:
- HTTP_PROXY=172.16.1.1:1
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
- HTTP_PROXY=172.16.1.1:1
-- /stdout --
** stderr **
I0120 13:22:23.235628 8400 out.go:345] Setting OutFile to fd 1684 ...
I0120 13:22:23.362381 8400 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:22:23.362441 8400 out.go:358] Setting ErrFile to fd 1672...
I0120 13:22:23.362441 8400 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:22:23.395622 8400 out.go:352] Setting JSON to false
I0120 13:22:23.403451 8400 start.go:129] hostinfo: {"hostname":"minikube6","uptime":1778368,"bootTime":1735600974,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5247 Build 19045.5247","kernelVersion":"10.0.19045.5247 Build 19045.5247","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
W0120 13:22:23.403528 8400 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0120 13:22:23.410391 8400 out.go:177] * [offline-docker-012800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5247 Build 19045.5247
I0120 13:22:23.433298 8400 notify.go:220] Checking for updates...
I0120 13:22:23.438287 8400 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
I0120 13:22:23.449308 8400 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 13:22:23.458314 8400 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
I0120 13:22:23.464296 8400 out.go:177] - MINIKUBE_LOCATION=20151
I0120 13:22:23.471550 8400 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 13:22:23.475000 8400 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 13:22:29.974578 8400 out.go:177] * Using the hyperv driver based on user configuration
I0120 13:22:29.978517 8400 start.go:297] selected driver: hyperv
I0120 13:22:29.978517 8400 start.go:901] validating driver "hyperv" against <nil>
I0120 13:22:29.978517 8400 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 13:22:30.039129 8400 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0120 13:22:30.039885 8400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 13:22:30.039885 8400 cni.go:84] Creating CNI manager for ""
I0120 13:22:30.039885 8400 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0120 13:22:30.039885 8400 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0120 13:22:30.040898 8400 start.go:340] cluster config:
{Name:offline-docker-012800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:offline-docker-012800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 13:22:30.040898 8400 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 13:22:30.046604 8400 out.go:177] * Starting "offline-docker-012800" primary control-plane node in "offline-docker-012800" cluster
I0120 13:22:30.051376 8400 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
I0120 13:22:30.051671 8400 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.0-docker-overlay2-amd64.tar.lz4
I0120 13:22:30.051671 8400 cache.go:56] Caching tarball of preloaded images
I0120 13:22:30.051964 8400 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0120 13:22:30.051964 8400 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
I0120 13:22:30.053418 8400 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\offline-docker-012800\config.json ...
I0120 13:22:30.054030 8400 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\offline-docker-012800\config.json: {Name:mked1d13da6f7f18a095587511f7731a2f16f3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:22:30.055039 8400 start.go:360] acquireMachinesLock for offline-docker-012800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0120 13:24:42.922629 8400 start.go:364] duration metric: took 2m12.8663143s to acquireMachinesLock for "offline-docker-012800"
I0120 13:24:42.922851 8400 start.go:93] Provisioning new machine with config: &{Name:offline-docker-012800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:offline-docker-012800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0120 13:24:42.922851 8400 start.go:125] createHost starting for "" (driver="hyperv")
I0120 13:24:42.927460 8400 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
I0120 13:24:42.927805 8400 start.go:159] libmachine.API.Create for "offline-docker-012800" (driver="hyperv")
I0120 13:24:42.927805 8400 client.go:168] LocalClient.Create starting
I0120 13:24:42.928855 8400 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
I0120 13:24:42.928855 8400 main.go:141] libmachine: Decoding PEM data...
I0120 13:24:42.928855 8400 main.go:141] libmachine: Parsing certificate...
I0120 13:24:42.929391 8400 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
I0120 13:24:42.929682 8400 main.go:141] libmachine: Decoding PEM data...
I0120 13:24:42.929777 8400 main.go:141] libmachine: Parsing certificate...
I0120 13:24:42.929904 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
I0120 13:24:44.903852 8400 main.go:141] libmachine: [stdout =====>] : Hyper-V
I0120 13:24:44.904683 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:24:44.904758 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
I0120 13:24:46.685638 8400 main.go:141] libmachine: [stdout =====>] : False
I0120 13:24:46.685638 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:24:46.685638 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
I0120 13:24:48.279908 8400 main.go:141] libmachine: [stdout =====>] : True
I0120 13:24:48.280264 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:24:48.280264 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
I0120 13:24:52.127700 8400 main.go:141] libmachine: [stdout =====>] : [
{
"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
"Name": "Default Switch",
"SwitchType": 1
}
]
I0120 13:24:52.127700 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:24:52.130487 8400 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0120 13:24:52.616477 8400 main.go:141] libmachine: Creating SSH key...
I0120 13:24:53.297267 8400 main.go:141] libmachine: Creating VM...
I0120 13:24:53.297267 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
I0120 13:24:56.383167 8400 main.go:141] libmachine: [stdout =====>] : [
{
"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
"Name": "Default Switch",
"SwitchType": 1
}
]
I0120 13:24:56.383304 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:24:56.383423 8400 main.go:141] libmachine: Using switch "Default Switch"
I0120 13:24:56.383594 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
I0120 13:24:58.223483 8400 main.go:141] libmachine: [stdout =====>] : True
I0120 13:24:58.223568 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:24:58.223605 8400 main.go:141] libmachine: Creating VHD
I0120 13:24:58.223757 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\fixed.vhd' -SizeBytes 10MB -Fixed
I0120 13:25:02.659054 8400 main.go:141] libmachine: [stdout =====>] :
ComputerName : minikube6
Path : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\fixed.vhd
VhdFormat : VHD
VhdType : Fixed
FileSize : 10486272
Size : 10485760
MinimumSize :
LogicalSectorSize : 512
PhysicalSectorSize : 512
BlockSize : 0
ParentPath :
DiskIdentifier : CC1D2AF0-7ADA-4C1B-91ED-E28158E679A1
FragmentationPercentage : 0
Alignment : 1
Attached : False
DiskNumber :
IsPMEMCompatible : False
AddressAbstractionType : None
Number :
I0120 13:25:02.659135 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:02.659226 8400 main.go:141] libmachine: Writing magic tar header
I0120 13:25:02.659226 8400 main.go:141] libmachine: Writing SSH key tar header
I0120 13:25:02.674896 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\disk.vhd' -VHDType Dynamic -DeleteSource
I0120 13:25:06.315234 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:06.315307 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:06.315439 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\disk.vhd' -SizeBytes 20000MB
I0120 13:25:09.096372 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:09.096774 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:09.096962 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM offline-docker-012800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
I0120 13:25:13.093903 8400 main.go:141] libmachine: [stdout =====>] :
Name State CPUUsage(%) MemoryAssigned(M) Uptime Status Version
---- ----- ----------- ----------------- ------ ------ -------
offline-docker-012800 Off 0 0 00:00:00 Operating normally 9.0
I0120 13:25:13.093903 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:13.094328 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName offline-docker-012800 -DynamicMemoryEnabled $false
I0120 13:25:15.494799 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:15.494799 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:15.495347 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor offline-docker-012800 -Count 2
I0120 13:25:17.799433 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:17.799537 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:17.799639 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName offline-docker-012800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\boot2docker.iso'
I0120 13:25:20.527720 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:20.528600 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:20.528782 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName offline-docker-012800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\disk.vhd'
I0120 13:25:23.224717 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:23.225844 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:23.225844 8400 main.go:141] libmachine: Starting VM...
I0120 13:25:23.225918 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM offline-docker-012800
I0120 13:25:26.525738 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:26.526103 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:26.526103 8400 main.go:141] libmachine: Waiting for host to start...
I0120 13:25:26.526201 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:25:28.865844 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:25:28.865844 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:28.866929 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:25:31.462563 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:31.462660 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:32.463927 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:25:34.772694 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:25:34.773687 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:34.773687 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:25:37.575584 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:37.575584 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:38.575722 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:25:41.076617 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:25:41.076694 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:41.076960 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:25:43.952092 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:43.952092 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:44.952947 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:25:47.281558 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:25:47.281558 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:47.281863 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:25:50.071180 8400 main.go:141] libmachine: [stdout =====>] :
I0120 13:25:50.071180 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:51.073591 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:25:53.542597 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:25:53.542792 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:53.542792 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:25:56.249098 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:25:56.249098 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:56.249098 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:25:58.506107 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:25:58.506107 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:25:58.506263 8400 machine.go:93] provisionDockerMachine start ...
I0120 13:25:58.506349 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:00.741695 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:00.741695 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:00.741695 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:03.426908 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:03.426984 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:03.436114 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:26:03.452227 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:26:03.452227 8400 main.go:141] libmachine: About to run SSH command:
hostname
I0120 13:26:03.592371 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0120 13:26:03.592371 8400 buildroot.go:166] provisioning hostname "offline-docker-012800"
I0120 13:26:03.592371 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:05.873587 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:05.873587 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:05.873587 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:08.597039 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:08.597212 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:08.604435 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:26:08.604732 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:26:08.604732 8400 main.go:141] libmachine: About to run SSH command:
sudo hostname offline-docker-012800 && echo "offline-docker-012800" | sudo tee /etc/hostname
I0120 13:26:08.776652 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-012800
I0120 13:26:08.776652 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:11.032351 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:11.032351 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:11.032993 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:13.619265 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:13.619932 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:13.626122 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:26:13.626276 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:26:13.626870 8400 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-docker-012800' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-012800/g' /etc/hosts;
else
echo '127.0.1.1 offline-docker-012800' | sudo tee -a /etc/hosts;
fi
fi
I0120 13:26:13.775132 8400 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 13:26:13.775132 8400 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0120 13:26:13.775132 8400 buildroot.go:174] setting up certificates
I0120 13:26:13.775132 8400 provision.go:84] configureAuth start
I0120 13:26:13.775132 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:15.809347 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:15.809347 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:15.810233 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:18.323930 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:18.323930 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:18.324462 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:20.481874 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:20.481874 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:20.482338 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:23.017755 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:23.017755 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:23.017755 8400 provision.go:143] copyHostCerts
I0120 13:26:23.018497 8400 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0120 13:26:23.018497 8400 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0120 13:26:23.019407 8400 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
I0120 13:26:23.021010 8400 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0120 13:26:23.021010 8400 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0120 13:26:23.021320 8400 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0120 13:26:23.022927 8400 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0120 13:26:23.022927 8400 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0120 13:26:23.023343 8400 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
I0120 13:26:23.024526 8400 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.offline-docker-012800 san=[127.0.0.1 172.29.102.139 localhost minikube offline-docker-012800]
I0120 13:26:23.269435 8400 provision.go:177] copyRemoteCerts
I0120 13:26:23.279597 8400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 13:26:23.279597 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:25.521364 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:25.521532 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:25.521646 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:28.222314 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:28.222314 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:28.222314 8400 sshutil.go:53] new ssh client: &{IP:172.29.102.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\id_rsa Username:docker}
I0120 13:26:28.331690 8400 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0520438s)
I0120 13:26:28.332320 8400 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 13:26:28.377347 8400 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
I0120 13:26:28.425783 8400 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 13:26:28.476139 8400 provision.go:87] duration metric: took 14.7008666s to configureAuth
I0120 13:26:28.476139 8400 buildroot.go:189] setting minikube options for container-runtime
I0120 13:26:28.477131 8400 config.go:182] Loaded profile config "offline-docker-012800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I0120 13:26:28.477335 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:30.783901 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:30.783901 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:30.784030 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:33.546340 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:33.546949 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:33.553296 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:26:33.553927 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:26:33.553927 8400 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0120 13:26:33.697773 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0120 13:26:33.697773 8400 buildroot.go:70] root file system type: tmpfs
I0120 13:26:33.698187 8400 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0120 13:26:33.698298 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:35.964204 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:35.964382 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:35.964513 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:38.715513 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:38.715513 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:38.722839 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:26:38.723525 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:26:38.723525 8400 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="HTTP_PROXY=172.16.1.1:1"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0120 13:26:38.898529 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=HTTP_PROXY=172.16.1.1:1
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0120 13:26:38.899068 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:41.069062 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:41.069062 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:41.069833 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:43.560841 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:43.560925 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:43.566797 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:26:43.567727 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:26:43.567727 8400 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0120 13:26:45.845638 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0120 13:26:45.845638 8400 machine.go:96] duration metric: took 47.3389206s to provisionDockerMachine
I0120 13:26:45.845638 8400 client.go:171] duration metric: took 2m2.9166546s to LocalClient.Create
I0120 13:26:45.845638 8400 start.go:167] duration metric: took 2m2.9166546s to libmachine.API.Create "offline-docker-012800"
I0120 13:26:45.845638 8400 start.go:293] postStartSetup for "offline-docker-012800" (driver="hyperv")
I0120 13:26:45.845638 8400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 13:26:45.858836 8400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 13:26:45.858836 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:47.983550 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:47.983550 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:47.984286 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:50.561785 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:50.562227 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:50.562772 8400 sshutil.go:53] new ssh client: &{IP:172.29.102.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\id_rsa Username:docker}
I0120 13:26:50.675003 8400 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8161211s)
I0120 13:26:50.693379 8400 ssh_runner.go:195] Run: cat /etc/os-release
I0120 13:26:50.699649 8400 info.go:137] Remote host: Buildroot 2023.02.9
I0120 13:26:50.699649 8400 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0120 13:26:50.700773 8400 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0120 13:26:50.702443 8400 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\78922.pem -> 78922.pem in /etc/ssl/certs
I0120 13:26:50.715782 8400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 13:26:50.732281 8400 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\78922.pem --> /etc/ssl/certs/78922.pem (1708 bytes)
I0120 13:26:50.772238 8400 start.go:296] duration metric: took 4.9265528s for postStartSetup
I0120 13:26:50.775843 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:52.878531 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:52.878531 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:52.878531 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:26:55.481062 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:26:55.481062 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:55.481880 8400 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\offline-docker-012800\config.json ...
I0120 13:26:55.485302 8400 start.go:128] duration metric: took 2m12.561123s to createHost
I0120 13:26:55.485302 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:26:57.633850 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:26:57.634044 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:26:57.634044 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:27:00.178545 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:27:00.178616 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:00.187899 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:27:00.188654 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:27:00.188654 8400 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0120 13:27:00.331672 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737379620.340382240
I0120 13:27:00.331672 8400 fix.go:216] guest clock: 1737379620.340382240
I0120 13:27:00.331672 8400 fix.go:229] Guest: 2025-01-20 13:27:00.34038224 +0000 UTC Remote: 2025-01-20 13:26:55.4853024 +0000 UTC m=+272.364220101 (delta=4.85507984s)
I0120 13:27:00.332202 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:27:02.489330 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:27:02.490274 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:02.490274 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:27:05.041467 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:27:05.041467 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:05.049022 8400 main.go:141] libmachine: Using SSH client type: native
I0120 13:27:05.049798 8400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x855360] 0x857ea0 <nil> [] 0s} 172.29.102.139 22 <nil> <nil>}
I0120 13:27:05.049798 8400 main.go:141] libmachine: About to run SSH command:
sudo date -s @1737379620
I0120 13:27:05.208481 8400 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 20 13:27:00 UTC 2025
I0120 13:27:05.208481 8400 fix.go:236] clock set: Mon Jan 20 13:27:00 UTC 2025
(err=<nil>)
I0120 13:27:05.208481 8400 start.go:83] releasing machines lock for "offline-docker-012800", held for 2m22.2842878s
I0120 13:27:05.209650 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:27:07.442626 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:27:07.442626 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:07.443048 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:27:10.153376 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:27:10.153434 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:10.184840 8400 out.go:177] * Found network options:
I0120 13:27:10.197566 8400 out.go:177] - HTTP_PROXY=172.16.1.1:1
W0120 13:27:10.201641 8400 out.go:270] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.29.102.139).
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.29.102.139).
I0120 13:27:10.216386 8400 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I0120 13:27:10.269741 8400 out.go:177] - HTTP_PROXY=172.16.1.1:1
I0120 13:27:10.309850 8400 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
I0120 13:27:10.310036 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:27:10.320905 8400 ssh_runner.go:195] Run: cat /version.json
I0120 13:27:10.320905 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-012800 ).state
I0120 13:27:12.629479 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:27:12.629761 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:12.629840 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:27:12.630464 8400 main.go:141] libmachine: [stdout =====>] : Running
I0120 13:27:12.630537 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:12.630537 8400 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-012800 ).networkadapters[0]).ipaddresses[0]
I0120 13:27:15.398824 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:27:15.398824 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:15.399565 8400 sshutil.go:53] new ssh client: &{IP:172.29.102.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\id_rsa Username:docker}
I0120 13:27:15.419645 8400 main.go:141] libmachine: [stdout =====>] : 172.29.102.139
I0120 13:27:15.419645 8400 main.go:141] libmachine: [stderr =====>] :
I0120 13:27:15.419645 8400 sshutil.go:53] new ssh client: &{IP:172.29.102.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-012800\id_rsa Username:docker}
I0120 13:27:15.497488 8400 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1875883s)
W0120 13:27:15.498024 8400 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
stdout:
stderr:
bash: line 1: curl.exe: command not found
I0120 13:27:15.516648 8400 ssh_runner.go:235] Completed: cat /version.json: (5.1956935s)
I0120 13:27:15.529450 8400 ssh_runner.go:195] Run: systemctl --version
I0120 13:27:15.550438 8400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0120 13:27:15.558693 8400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0120 13:27:15.569716 8400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 13:27:15.598191 8400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0120 13:27:15.598281 8400 start.go:495] detecting cgroup driver to use...
I0120 13:27:15.598379 8400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
W0120 13:27:15.633912 8400 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
W0120 13:27:15.634041 8400 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0120 13:27:15.652912 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 13:27:15.685156 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 13:27:15.707083 8400 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 13:27:15.720664 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 13:27:15.753685 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 13:27:15.785046 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 13:27:15.819246 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 13:27:15.854229 8400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 13:27:15.883932 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 13:27:15.917662 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 13:27:15.947455 8400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 13:27:15.980441 8400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 13:27:16.000017 8400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0120 13:27:16.013221 8400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0120 13:27:16.048194 8400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 13:27:16.079818 8400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 13:27:16.301394 8400 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 13:27:16.333391 8400 start.go:495] detecting cgroup driver to use...
I0120 13:27:16.344406 8400 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0120 13:27:16.380434 8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0120 13:27:16.418220 8400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0120 13:27:16.476162 8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0120 13:27:16.515341 8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 13:27:16.553959 8400 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 13:27:16.619206 8400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 13:27:16.646629 8400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 13:27:16.695251 8400 ssh_runner.go:195] Run: which cri-dockerd
I0120 13:27:16.718943 8400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0120 13:27:16.737539 8400 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0120 13:27:16.788288 8400 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0120 13:27:16.980917 8400 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0120 13:27:17.178503 8400 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0120 13:27:17.178878 8400 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0120 13:27:17.222369 8400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 13:27:17.419830 8400 ssh_runner.go:195] Run: sudo systemctl restart docker
I0120 13:28:18.529153 8400 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1085866s)
I0120 13:28:18.547586 8400 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0120 13:28:18.583479 8400 out.go:201]
W0120 13:28:18.588552 8400 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jan 20 13:26:44 offline-docker-012800 systemd[1]: Starting Docker Application Container Engine...
Jan 20 13:26:44 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:44.155866832Z" level=info msg="Starting up"
Jan 20 13:26:44 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:44.156968760Z" level=info msg="containerd not running, starting managed containerd"
Jan 20 13:26:44 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:44.158306995Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.191701863Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.220852621Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.220954323Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221019025Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221041426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221121628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221247031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221426036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221574640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221669142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221687342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221782645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.222106853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.224828324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.224948127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.225200434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.225291436Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.225397639Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.225646145Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.274287610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.274549416Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.274682520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.274711921Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.274747022Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.274896926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.275643345Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.275854750Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.275966553Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.275990054Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276018555Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276036655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276051156Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276067656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276084756Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276099657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276126157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276142258Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276172859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276263461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276349963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276373164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276387564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276403465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276417665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276432665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276447966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276472366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276619570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276643371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276676272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276705573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276817675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276947579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.276987980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277214886Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277271187Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277297088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277311588Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277325289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277348889Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277464492Z" level=info msg="NRI interface is disabled by configuration."
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277887103Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.277963305Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.278004906Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.278060008Z" level=info msg="containerd successfully booted in 0.087427s"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.237808828Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.275023335Z" level=info msg="Loading containers: start."
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.426698831Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.683956062Z" level=info msg="Loading containers: done."
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.713374018Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.713410519Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.713433719Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.713749428Z" level=info msg="Daemon has completed initialization"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.852318587Z" level=info msg="API listen on /var/run/docker.sock"
Jan 20 13:26:45 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:45.852444891Z" level=info msg="API listen on [::]:2376"
Jan 20 13:26:45 offline-docker-012800 systemd[1]: Started Docker Application Container Engine.
Jan 20 13:27:17 offline-docker-012800 systemd[1]: Stopping Docker Application Container Engine...
Jan 20 13:27:17 offline-docker-012800 dockerd[667]: time="2025-01-20T13:27:17.454708527Z" level=info msg="Processing signal 'terminated'"
Jan 20 13:27:17 offline-docker-012800 dockerd[667]: time="2025-01-20T13:27:17.456365029Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jan 20 13:27:17 offline-docker-012800 dockerd[667]: time="2025-01-20T13:27:17.456498530Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jan 20 13:27:17 offline-docker-012800 dockerd[667]: time="2025-01-20T13:27:17.456556630Z" level=info msg="Daemon shutdown complete"
Jan 20 13:27:17 offline-docker-012800 dockerd[667]: time="2025-01-20T13:27:17.456662930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jan 20 13:27:18 offline-docker-012800 systemd[1]: docker.service: Deactivated successfully.
Jan 20 13:27:18 offline-docker-012800 systemd[1]: Stopped Docker Application Container Engine.
Jan 20 13:27:18 offline-docker-012800 systemd[1]: Starting Docker Application Container Engine...
Jan 20 13:27:18 offline-docker-012800 dockerd[1087]: time="2025-01-20T13:27:18.511143923Z" level=info msg="Starting up"
Jan 20 13:28:18 offline-docker-012800 dockerd[1087]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 20 13:28:18 offline-docker-012800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 13:28:18 offline-docker-012800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 20 13:28:18 offline-docker-012800 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jan 20 13:26:44 offline-docker-012800 systemd[1]: Starting Docker Application Container Engine...
Jan 20 13:26:44 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:44.155866832Z" level=info msg="Starting up"
Jan 20 13:26:44 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:44.156968760Z" level=info msg="containerd not running, starting managed containerd"
Jan 20 13:26:44 offline-docker-012800 dockerd[667]: time="2025-01-20T13:26:44.158306995Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.191701863Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.220852621Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.220954323Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221019025Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221041426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221121628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221247031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221426036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221574640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221669142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221687342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 20 13:26:44 offline-docker-012800 dockerd[675]: time="2025-01-20T13:26:44.221782645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
-- /stdout --
W0120 13:28:18.588766 8400 out.go:270] *
*
W0120 13:28:18.590519 8400 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 13:28:18.594484 8400 out.go:201]
** /stderr **
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-012800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv failed: exit status 90
panic.go:629: *** TestOffline FAILED at 2025-01-20 13:28:18.9217464 +0000 UTC m=+10263.242458701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-012800 -n offline-docker-012800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-012800 -n offline-docker-012800: exit status 6 (12.7438048s)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0120 13:28:31.580382 12096 status.go:458] kubeconfig endpoint: get endpoint: "offline-docker-012800" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "offline-docker-012800" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "offline-docker-012800" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-windows-amd64.exe delete -p offline-docker-012800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-012800: (1m1.5472989s)
--- FAIL: TestOffline (430.12s)